Ruby

Ruby is currently unavailable for general access.

The Ruby Transitional Cluster is a limited-access test cluster to explore Intel Xeon Phi accelerators and ideas we want to consider for our next production cluster. Ruby has 8 compute nodes, each with one Phi card.

NB: Ruby is currently a very dynamic environment, and changes are happening frequently as we continue development.


Hardware

Detailed system specifications:

  • 128 total cores
    • 16 cores and 128 gigabytes of memory per node
  • Intel Xeon E5-2670 CPUs
  • HP SL250 Nodes
  • 8 Intel Xeon Phi 5110p accelerators
  • 1 TB of local disk space in '/tmp'
  • FDR IB Interconnect
    • Low latency
    • High throughput
    • High quality of service
  • Two nodes also have NVIDIA K20X (Kepler) GPU cards (not yet available)
  • Two interactive nodes, configured the same as the compute nodes, but with Intel Xeon Phi 7120a cards instead of the 5110p cards.

How to Connect

To connect to Ruby, ssh to ruby.osc.edu. Access to Ruby is limited; please contact OSC Help to request access for your research group.

We will be adding a guide to using the Phi accelerators. To run applications in "native mode", you must create SSH keys and then ssh to the accelerator. To generate the keys, run ssh-keygen and accept all the defaults. Then append the public key to the file "authorized_keys" in the ".ssh" folder:

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

For security, make sure your ".ssh" folder is readable only by your user. To do this, run chmod -R 700 ~/.ssh.

The cards themselves run Linux. The Xeon Phi cards on the login hosts, ruby01 and ruby02, are available for interactive access. You can use ssh to access the cards directly, for example:

ssh mic0-ruby01

Each card has access to NFS, so you can see the files in your HPC home directory.

Batch Specifics

Refer to the documentation for our batch environment to understand how to use PBS on OSC hardware.

The batch scheduler is not yet in place on Ruby.

Compiling for the Xeon Phis

The Intel compilers should be used for code executing on the Xeon Phi co-processor cards.

For 'native' execution, use the '-mmic' flag:

icc -mmic -O2 -o pi.exe.mic pi.cc

For 'offload' mode, you must source an Intel script to set up some environment variables, then compile with the '-openmp' flag. You can determine the compiler version by looking at the path returned by the command which icc.

source /usr/local/intel/composer_xe_<year>_<version>/bin/compilervars.sh intel64
icc -openmp -O2 -o pi.exe pi.cc

To control the number of threads used on the Xeon Phi cards during offload, set the environment variables as follows:

export MIC_ENV_PREFIX=MIC
export MIC_OMP_NUM_THREADS=<number of threads>

For more information on programming the Xeon Phi cards see the Intel documentation available at: http://software.intel.com/en-us/mic-developer

Using OSC Resources

For more information about how to use OSC resources, please see our guide on batch processing at OSC. For specific information about modules and file storage, please see the Batch Execution Environment page.
