2015 Research Report

HPC Systems Services

With the April dedication of OSC’s newest cluster, the Ohio Supercomputer Center currently offers researchers three mid-sized high performance computing (HPC) systems: the HP/Intel Xeon Phi Ruby Cluster, the HP/Intel Xeon Oakley Cluster and the IBM/AMD Opteron Glenn Cluster. OSC also provides researchers with a storage environment offering several petabytes of total capacity across a variety of file systems. With all of that already on the floor at OSC’s nearby data center, many upgrades and installations await OSC clients in 2016.

THE RUBY CLUSTER

The seven racks of the new Ruby Cluster house 240 nodes and provide a total peak performance of more than 140 teraflops. We view Ruby as a transitional system, offering newer hardware than Oakley along with additional capacity. Because the Glenn Cluster must be powered down to make room for the 2016 system, Ruby’s value only increases: it will provide additional computational capacity while we physically remove Glenn and before the new system becomes available. The software environment on Ruby will also be a good springboard to the next system.

LOOKING AHEAD

In 2016, the remaining racks of the Glenn Cluster are expected to make way for a new system that will exceed the peak performance of all the center’s existing systems combined. Due to the significant increase in performance relative to the current systems, OSC will ensure that other facets of the infrastructure can keep pace with the new system.

We’ll be making enhancements to our external network connections. The core router OSC uses to connect to the Internet through OARnet will be upgraded to support a 40-gigabit-per-second connection, and our peer connection with The Ohio State University will be upgraded to the same speed. Both upgrades lay the groundwork for an eventual move to 100-Gbps connections, matching what’s already available on the OARnet backbone.

The coming year will see upgrades of all the storage at OSC, including upgrades and replacements for users’ home directories, project storage and global scratch file systems. We will increase our total file system capacity to over five petabytes, with aggregate throughput performance of approximately 200 gigabytes per second. Other improvements to our storage environment will respond to the needs of our user community. These include expansions to our backup systems, not only to accommodate the additional storage but also to provide a user-accessible tape archive for long-term retention of data.