Cardinal ranked on global Green500 list for HPC energy efficiency

COLUMBUS, Ohio (Feb 27, 2025) — 

The Ohio Supercomputer Center’s (OSC) newest research computing cluster, Cardinal, has been ranked No. 15 on the Green500 list, which ranks the world’s most energy-efficient high performance computing (HPC) systems.

Cardinal, a Dell Technologies-based cluster launched in November 2024, saves power by employing a warm water, direct-to-chip liquid cooling system, manufactured by CoolIT Systems, on the cluster’s NVIDIA H100 Tensor Core graphics processing unit (GPU) chips. The cluster is the first in OSC’s data center to use this technology with GPUs, as previous systems had liquid cooling only for the central processing units (CPUs).

OSC launched the Cardinal cluster in November 2024 to meet the rising computational needs of artificial intelligence and machine learning work.

“OSC has strived to find innovative solutions to energy consumption and cooling issues that arise as we develop more powerful HPC clusters to meet client needs, which include a growing demand for artificial intelligence (AI), machine learning, simulation and data analysis support,” said Doug Johnson, OSC associate director.  

Prior to the launch of its Pitzer HPC cluster in 2018, OSC had used traditional methods of cooling its HPC systems: air conditioning units that pull chilled air into the computer racks and fans that dissipate the heat emanating from the hardware. However, OSC began running into limitations with cooling newer, more power-hungry research computing clusters in its data center, located at the State of Ohio Computer Center (SOCC) in Columbus, Ohio.

“For every watt of power in our system that a cluster uses, the vast majority of that is dissipated into the surrounding environment as heat,” Johnson said. “HPC clusters are very energy-hungry systems that can be expensive to maintain.”

OSC made use of the SOCC’s capacity for a warm water system and began to explore direct-to-chip liquid cooling technology for its next-generation clusters. This approach circulates warm water through cooling plates mounted directly on the computer chips. As the chips can reach temperatures of about 180 degrees Fahrenheit, even water as “cool” as 90 degrees can effectively carry heat away from the cluster.
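As a rough illustration of why “warm” water works, heat flow through a cold plate scales with the temperature difference between the chip and the coolant. The sketch below uses the temperatures cited above and an assumed, purely illustrative chip-to-coolant thermal resistance; it is not an OSC, Dell or NVIDIA specification.

```python
# Illustrative only: heat flow through a direct-to-chip cold plate scales with the
# temperature difference between chip and coolant, Q ~ delta_T / R_th.

def fahrenheit_delta_to_celsius(delta_f: float) -> float:
    """Convert a temperature *difference* from Fahrenheit to Celsius."""
    return delta_f * 5.0 / 9.0

chip_temp_f = 180.0   # approximate chip temperature cited in the article
water_temp_f = 90.0   # "warm" supply-water temperature cited in the article

delta_t_c = fahrenheit_delta_to_celsius(chip_temp_f - water_temp_f)  # ~50 C

# Hypothetical chip-to-coolant thermal resistance for a cold plate; 0.05 C/W is an
# assumed order of magnitude for illustration, not a vendor figure.
r_th_c_per_w = 0.05

heat_removed_w = delta_t_c / r_th_c_per_w
print(f"Chip-to-water temperature difference: {delta_t_c:.0f} C")
print(f"Heat that difference could drive through the plate: {heat_removed_w:.0f} W")
```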

The technology is more efficient than a traditional air conditioning system, as it does not require a refrigeration cycle, and a given volume of water can absorb thousands of times more heat than the same volume of air. OSC’s data center can also reduce its use of high-speed fans, Johnson said, which can account for almost 25% of a cluster’s power consumption.
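A back-of-the-envelope comparison using standard textbook properties of water and air (not OSC measurements) shows where that advantage comes from: per unit volume and per degree of temperature change, water carries a few thousand times more heat than air.

```python
# Standard textbook properties at roughly room conditions (not OSC measurements).
water_density_kg_m3 = 997.0
water_cp_j_kg_k = 4186.0

air_density_kg_m3 = 1.2
air_cp_j_kg_k = 1005.0

# Heat each fluid can carry per cubic meter per degree of temperature change.
water_volumetric_j_m3_k = water_density_kg_m3 * water_cp_j_kg_k
air_volumetric_j_m3_k = air_density_kg_m3 * air_cp_j_kg_k

ratio = water_volumetric_j_m3_k / air_volumetric_j_m3_k
print(f"Water carries roughly {ratio:,.0f}x more heat per unit volume than air")
# ~3,500x -- which is why a modest water flow can replace large volumes of chilled air.
```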

OSC was the first academic center in the U.S. to implement a warm water, direct-to-chip liquid cooling technology on a Dell system. More HPC centers are now incorporating the technology.  

This type of cooling system requires more investment upfront but has lower maintenance costs than traditional systems. Significant engineering work went into the design of Cardinal’s hardware, as the components needed to be made compatible with the chip cooling plates.  

Engineers also installed a coolant distribution system that taps into the SOCC’s warm water loop, which runs hundreds of gallons of warm water per minute at high pressure through the facility’s floors. (Each GPU node receives approximately one gallon of water per minute.) The water loop exchanges heat with a secondary loop within the cluster that carries an antifreeze-like liquid. The water used in this process is recirculated rather than consumed. Data center staff must monitor the system for any leaks and periodically flush the pipes that run to and through the servers to prevent biological growth.
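Taking the per-node flow rate cited above, a quick estimate shows how much heat roughly one gallon per minute can carry away. The temperature rise below is an assumption for illustration, not an OSC figure, and the coolant is treated as plain water for simplicity.

```python
# Rough estimate, Q = m_dot * c_p * delta_T, treating the coolant as plain water.

GALLON_IN_LITERS = 3.785

flow_gal_per_min = 1.0                                      # per-node flow cited in the article
flow_kg_per_s = flow_gal_per_min * GALLON_IN_LITERS / 60.0  # ~1 kg per liter of water

water_cp_j_kg_k = 4186.0
assumed_delta_t_c = 10.0   # hypothetical coolant temperature rise across one node

heat_removed_w = flow_kg_per_s * water_cp_j_kg_k * assumed_delta_t_c
print(f"~{heat_removed_w / 1000:.1f} kW of heat carried away per node at a "
      f"{assumed_delta_t_c:.0f} C coolant rise")
# ~2.6 kW -- roughly the scale of a few H100-class GPUs.
```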

As OSC regularly upgrades its HPC clusters to meet the rising technology needs of its higher education and commercial clients, it is imperative for the center to implement and maintain long-term solutions for its energy and cooling systems. Direct-to-chip liquid cooling can keep HPC resources running efficiently while helping the State of Ohio maintain affordable services for thousands of academic researchers, college students and companies. 

Cardinal, in addition to OSC’s Ascend cluster (2022, expansion in 2025), was designed to handle the resource-intensive work of AI, machine learning and data analysis. More disciplines across academia, as well as industry, are relying on OSC’s data center for research and educational courses on these topics. Examples include Ohio University, where digital artists use an OSC instance of Stable Diffusion software to explore incorporating AI tools into creative work, and Kent State University, where graduate students in the master’s program for AI use OSC resources to conduct research on natural language processing, cancer detection and deep learning-based image analysis.  

The Ohio Supercomputer Center (OSC) addresses the rising computational demands of academic and industrial research communities by providing a robust shared infrastructure and proven expertise in advanced modeling, simulation and analysis. OSC empowers scientists with the services essential to making extraordinary discoveries and innovations, partners with businesses and industry to leverage computational science as a competitive force in the global knowledge economy and leads efforts to equip the workforce with the key technology skills required for 21st century jobs.