Oakley

Oakley is an HP-built, Intel® Xeon® processor-based supercomputer, featuring more cores (8,328) on half as many nodes (694) as the center's former flagship system, the IBM Opteron 1350 Glenn Cluster. The Oakley Cluster can achieve 88 teraflops, tech-speak for performing 88 trillion floating point operations per second, or, with acceleration from 128 NVIDIA® Tesla graphics processing units (GPUs), a total peak performance of just over 154 teraflops.


Hardware

Photo: OSC Oakley HP Intel Xeon Cluster

Detailed system specifications:

  • 8,328 total cores
    • 12 cores/node & 48 gigabytes of memory/node
  • Intel Xeon X5650 CPUs
  • HP SL390 G7 Nodes
  • 128 NVIDIA Tesla M2070 GPUs
  • 873 GB of local disk space in '/tmp'
  • QDR InfiniBand interconnect
    • Low latency
    • High throughput
    • High quality of service
  • Theoretical system peak performance
    • 88.6 teraflops
  • GPU acceleration
    • Additional 65.5 teraflops
  • Total peak performance
    • 154.1 teraflops
  • Memory Increase
    • Increases memory from 2.5 gigabytes per core to 4.0 gigabytes per core.
  • Storage Expansion
    • Adds 600 terabytes of DataDirect Networks Lustre storage for a total of nearly two petabytes of available disk storage.
  • System Efficiency
    • 1.5x the performance of the former system at just 60 percent of its power consumption.

How to Connect

To connect to Oakley, ssh to oakley.osc.edu.
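
For example, from a terminal on your local machine (where "username" stands in for your own OSC username):

    ssh username@oakley.osc.edu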

Batch Specifics

Refer to the documentation for our batch environment to understand how to use PBS on OSC hardware. Some specifics you will need to know to create well-formed batch scripts:

  • Compute nodes on Oakley have 12 cores, i.e., 12 processors per node (ppn). Parallel jobs must use ppn=12 (see the example script after this list).
  • If you need more than 48 GB of RAM per node, you may run on one of the 8 large memory (192 GB) nodes on Oakley ("bigmem"). You can request a large memory node on Oakley by adding the following directive to your batch script: #PBS -l mem=192GB
  • We have a single huge memory node ("hugemem") with 1 TB of RAM and 32 cores. You can schedule this node by adding the following directive to your batch script: #PBS -l nodes=1:ppn=32. This node is only for serial jobs and can only have one job running on it at a time, so you must request the entire node to be scheduled on it. In addition, there is a walltime limit of 48 hours for jobs on this node. Requesting fewer than 32 cores but a memory requirement greater than 192 GB will not schedule the 1 TB node; just request nodes=1:ppn=32 with a walltime of 48 hours or less, and the scheduler will place you on the 1 TB node.
  • GPU jobs may request any number of cores and either 1 or 2 GPUs.
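
As a rough sketch, a batch script for a parallel MPI job on Oakley might begin like the following; the job name, node count, walltime, and executable are placeholders to adjust for your own work:

    #PBS -N example_job          # job name (placeholder)
    #PBS -l nodes=2:ppn=12       # two full Oakley nodes, 12 cores each
    #PBS -l walltime=1:00:00     # requested wall-clock time
    #PBS -j oe                   # merge stdout and stderr into one file

    cd $PBS_O_WORKDIR            # start in the directory the job was submitted from
    mpiexec ./my_program         # placeholder executable, launched across all requested cores

A script like this is submitted with qsub. To target the large memory nodes instead, add the mem=192GB directive described above; to target the huge memory node, request nodes=1:ppn=32.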

Using OSC Resources

For more information about how to use OSC resources, please see our guide on batch processing at OSC. For specific information about modules and file storage, please see the Batch Execution Environment page.

Oakley Changelog

Feb 2 2015 - 4:23pm

We have updated the default module of ANSYS on both Oakley and Glenn. The default version is now ANSYS 14.5.7 on Oakley and ANSYS 14.5 on Glenn. For example, if you run module load ansys on Oakley, ANSYS 14.5.7 (instead of ANSYS 13.0) is loaded. However, we keep the older versions of ANSYS installed on our clusters. Use module avail ansys to see all the available versions of ANSYS.
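
For instance, to list the installed ANSYS versions and load one explicitly (the exact versioned module name below is an assumption based on the version numbers above):

    module avail ansys            # list every installed ANSYS version
    module load ansys/14.5.7      # assumed versioned module name; plain "module load ansys" loads the default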

Feb 2 2015 - 4:19pm

We have updated the default module of FLUENT on both Oakley and Glenn. The default version is now FLUENT 15.0.7 on Oakley and FLUENT 14.5 on Glenn. For example, if you run module load fluent on Oakley, FLUENT 15.0.7 (instead of FLUENT 13.0) is loaded. However, we keep the older versions of FLUENT installed on our clusters. Use module avail fluent to see all the available versions of FLUENT.

Nov 18 2014 - 4:01pm
  • FLUENT 15.0.7 has been installed on Oakley, and is available by loading the module fluent/15.0.7.
  • A license is in place for the V2F module to be used with FLUENT by academic users.
Oct 21 2014 - 11:20am

We have updated pbsdcp to fix a rare bug seen with certain MPI libraries.