At the heart of the Ohio Supercomputer Center are its supercomputers, mass storage systems and software applications. Collectively, OSC's supercomputers provide a peak computing performance of 214 Teraflops, the equivalent of everyone on earth performing more than 30,000 calculations every second (a quick back-of-the-envelope check appears at the end of this section). Last year, more than 1,000 academic and healthcare researchers from across Ohio took advantage of OSC's supercomputing and storage resources, consuming more than 85 million computing hours.

These users depend on four key systems available at OSC:

- HP Intel Xeon Oakley Cluster, which provides clients with a total peak performance of 154 Teraflops at 60 percent of the power consumption of previous systems. Oakley also offers 4 gigabytes of memory per core, more than many national supercomputing centers provide.
- IBM AMD Opteron Glenn Cluster, which provides clients with a total peak performance of 60 Teraflops.
- Csuri Advanced GPU Environment, which leverages the unique computing properties of graphics processing units to provide a robust visualization environment. These GPUs are accessed through either the Oakley or Glenn clusters.
- Mass Storage Environment, which contains more than 2 Petabytes of disk storage behind a single, centralized point of control.

Additionally, knowing that any computer, supercomputer or otherwise, is only as useful as the software that runs on it, the center provides licenses for more than 30 software applications and access to more than 70 different software packages. Researchers can also run software for which they provide their own licenses. OpenFOAM for computational fluid dynamics, LS-DYNA for structural mechanics and Parallel MATLAB for numeric computation and visualization were among the most heavily used codes this past year.

"We take great pride in meeting our users' needs," says Doug Johnson, senior systems engineer. "We strive to help them be more successful in their research."

Beyond providing these shared statewide resources, OSC works to create a user-focused, user-friendly environment. For example, Johnson and his colleagues in HPC operations established the Ruby Development Cluster, in partnership with Intel and The Ohio State University, to test Intel Xeon processor cards. A team led by David Hudak, Ph.D., program director for cyberinfrastructure and software development, created OnDemand, a web-based application that gives users "point-and-click" access to the supercomputers. And the user services support team interacts with users regularly, from sending system notifications to offering one-on-one coaching on improving specific codes.
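As for the per-person figure quoted at the top of this section: it follows from dividing the aggregate peak performance by the world population. Here is a minimal sketch in Python, assuming a population of roughly seven billion at the time of writing (an assumption; the report does not state the figure it used):

    # Back-of-the-envelope check of the per-person throughput claim.
    peak_flops = 214e12   # 214 Teraflops aggregate peak, as stated by OSC
    population = 7e9      # assumed world population at time of writing
    per_person = peak_flops / population
    print(f"{per_person:,.0f} calculations per second per person")  # ~30,571

The result, roughly 30,571 calculations per second for every person on earth, is consistent with the "over 30,000" claim in the text.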