Hardware Specification
Below is a summary of the hardware:
- 326 "dense compute" nodes (96 usable cores, 128 GB HBM2e and 512 GB DDR5 memory)
- 32 GPU nodes (96 usable cores, 1 TB DDR5 memory, 4 NVIDIA H100 GPUs each with 94 GB HBM2e memory and NVIDIA NVLink)
- 16 large memory nodes (96 usable cores, 128 GB HBM2e and 2 TB DDR5 memory)
See the Cardinal page and Technical Specifications page for more information.
File Systems
Cardinal accesses the same OSC mass storage environment as our other clusters. Therefore, users have the same home directory, project space, and scratch space as on the other clusters.
Software Environment
The Cardinal cluster runs on Red Hat Enterprise Linux (RHEL) 9, introducing several software-related changes compared to the RHEL 7 environment used on the Owens and Pitzer clusters. These updates provide access to modern tools and libraries but may also require adjustments to your workflows. Please refer to the Cardinal Software Environment page for key software changes and available software.
Cardinal uses the same module system as the other clusters.
Use module load <package> to add a software package to your environment. Use module list to see which modules are currently loaded, and module avail to see which modules are available to load. To search for modules that may not be visible due to dependencies or conflicts, use module spider.
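For example, a typical module session might look like the sketch below. The package names are illustrative placeholders; run module avail on Cardinal to see what is actually installed.

    module avail             # list the modules available to load
    module spider openmpi    # search all modules, including ones hidden by dependencies
    module load intel        # load a compiler suite (placeholder name; pick from module avail)
    module list              # confirm which modules are now loaded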
You can keep up to date on the software packages that have been made available on Cardinal by viewing the Software by System page and selecting the Cardinal system.
Programming Environment
The Cardinal cluster supports programming in C, C++, and Fortran. The available compiler suites include Intel, oneAPI, and GCC. Additionally, users have access to high-bandwidth memory (HBM), which is expected to enhance the performance of memory-bound applications. Other codes may also benefit from HBM, depending on their workload characteristics.
Please refer to the Cardinal Programming Environment page for details on compiler commands, parallel and GPU computing, and instructions on how to effectively utilize HBM.
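As a rough illustration, compiling a threaded or MPI code might look like the sketch below. The module names are assumptions, and the exact compiler commands and recommended flags are documented on the Cardinal Programming Environment page.

    # Load a compiler and MPI stack first (placeholder names; check module avail):
    module load gcc
    module load openmpi                         # or whichever MPI module Cardinal provides
    gcc -fopenmp -O2 -o my_app my_app.c         # OpenMP threading with GCC
    mpicc -fopenmp -O2 -o my_app_mpi my_app.c   # MPI wrapper supplied by the loaded MPI module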
Batch Specifics
The PBS compatibility layer is disabled on Cardinal, so PBS batch scripts will NOT work on Cardinal even though they still work on the Owens and Pitzer clusters. In addition, you need to use the sbatch command (instead of qsub) to submit jobs. Refer to the Slurm migration page to learn how to use Slurm, and to the batch limit page for the scheduling policy during the Program.
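For orientation, a minimal Slurm batch script might look like the following sketch. The account code, resource requests, and module name are placeholders; consult the batch limit page for the actual limits and policies.

    #!/bin/bash
    #SBATCH --job-name=example
    #SBATCH --account=PAS1234        # placeholder project code
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=96
    #SBATCH --time=01:00:00

    module load gcc                  # placeholder module name
    srun ./my_app

Submit it with sbatch job.sh; qsub will not work because the PBS compatibility layer is disabled.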
Some specifics you will need to know to create well-formed batch scripts:
- Follow the Slurm job script page to convert your PBS batch scripts to Slurm scripts if you have not already done so.
- Refer to the job management page for how to manage and monitor jobs.
- Jobs may request partial nodes, including both serial (nodes=1) and multi-node (nodes>1) jobs.
- Most dense compute nodes have their HBM configured in flat mode, but 4 nodes are configured in cache mode. Please refer to the HBM page for a detailed discussion of flat and cache modes, and to the batch limit page for how to request a particular mode; a sketch of using HBM in flat mode follows this list.
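As an illustration of working with HBM in flat mode, the sketch below binds an application's memory to the HBM NUMA nodes with numactl. The NUMA node IDs are assumptions and vary by configuration; read them from numactl -H on an actual dense compute node, and see the HBM page for the recommended approach.

    # In flat mode the HBM appears as separate, CPU-less NUMA nodes; inspect the layout:
    numactl -H

    # Force all allocations onto the HBM nodes (IDs below are hypothetical):
    numactl --membind=8-15 ./my_app

In cache mode no extra steps are needed, since the HBM acts as a transparent cache in front of the DDR5 memory.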