
Cardinal

OSC's Cardinal cluster is slated to launch in 2024. 

 

Detailed system specifications:

  • 378 Dell nodes, 39,312 total cores, 128 GPUs 

  • Dense Compute: 326 Dell PowerEdge C6620 two-socket servers, each with: 

    • 2 Intel Xeon CPU Max 9470 (Sapphire Rapids, 52 cores [48 usable], 2.0 GHz) processors 

    • 128 GB HBM2e and 512 GB DDR5 memory 

    • 1.6 TB NVMe local storage 

    • NDR200 InfiniBand 

  • GPU Compute: 32 Dell PowerEdge XE9640 two-socket servers, each with: 

    • 2 Intel Xeon Platinum 8470 (Sapphire Rapids, 52 cores [48 usable], 2.0 GHz) processors 

    • 1 TB DDR5 memory 

    • 4 NVIDIA H100 (Hopper) GPUs, each with 94 GB HBM2e memory and NVIDIA NVLink 

    • 12.8 TB NVMe local storage 

    • 4 NDR400 InfiniBand HCAs supporting GPUDirect 

  • Analytics: 16 Dell PowerEdge R660 two-socket servers, each with: 

    • 2 Intel Xeon CPU Max 9470 (Sapphire Rapids, 52 cores [48 usable], 2.0 GHz) processors 

    • 128 GB HBM2e and 2 TB DDR5 memory 

    • 12.8 TB NVMe local storage 

    • NDR200 InfiniBand 

  • Login nodes: 4 Dell PowerEdge R660 two-socket servers, each with: 

    • 2 Intel Xeon CPU Max 9470 (Sapphire Rapids, 52 cores [48 usable], 2.0 GHz) processors 

    • 128 GB HBM2e and 1 TB DDR5 memory 

    • 3.2 TB NVMe local storage 

    • NDR200 InfiniBand  

    • IP address: TBD 

  • ~10.5 PF theoretical system peak performance  

    • ~8 PF (GPU) 

    • ~2.5 PF (CPU) 

  • 9 physical racks, plus two Coolant Distribution Units (CDUs) providing direct-to-chip liquid cooling for all nodes 
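The headline totals follow directly from the per-partition counts above. A minimal sketch of that arithmetic, where the node, socket, core, and GPU counts are taken from the spec list but the per-device FLOP rates (AVX-512 FMA throughput for the CPUs, FP64 tensor peak for the H100s) are illustrative assumptions not stated on this page:

```python
# Sanity-check Cardinal's summary numbers against the per-partition specs.
CORES_PER_SOCKET = 52   # Xeon Max 9470 / Platinum 8470 (per spec)
SOCKETS_PER_NODE = 2

partitions = {          # name: (node count, GPUs per node) — from the list above
    "dense":     (326, 0),
    "gpu":       (32, 4),
    "analytics": (16, 0),
    "login":     (4, 0),
}

nodes = sum(n for n, _ in partitions.values())
cores = nodes * SOCKETS_PER_NODE * CORES_PER_SOCKET
gpus  = sum(n * g for n, g in partitions.values())
print(nodes, cores, gpus)   # 378 39312 128 — matches the summary line

# Rough peak-FLOPS estimate (assumed, not vendor-verified for this system):
#   CPU: 2.0 GHz x 32 FP64 FLOP/cycle/core (two AVX-512 FMA units)
#   GPU: ~67 TFLOPS FP64 tensor peak per H100 SXM
cpu_pf = cores * 2.0e9 * 32 / 1e15
gpu_pf = gpus * 67e12 / 1e15
print(round(cpu_pf, 2), round(gpu_pf, 2))   # 2.52 8.58
```

Under these assumptions the estimate lands near the quoted ~2.5 PF (CPU) and ~8 PF (GPU) figures; the exact published numbers likely use slightly different per-device peak rates.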

Service: