Search Documentation

Cardinal

Compilers

C, C++, and Fortran are supported on the Cardinal cluster. The Intel, oneAPI, and GNU compiler suites are available, and the Intel development toolchain is loaded by default. Compiler commands and recommended options for serial programs are listed in the table below. See also our compilation guide.

Cardinal
This page is intended for users accepted into the Cardinal Early User Program (10/07/2024 - 11/03/2024). This page and its linked resources are actively being updated. Acceptance notifications were sent to PIs on October 3. The application period has now closed.
Cardinal

These are the public key fingerprints for Cardinal:

cardinal: ssh_host_rsa_key.pub = 73:f2:07:6c:76:b4:68:49:86:ed:ef:a3:55:90:58:1b
cardinal: ssh_host_ed25519_key.pub = 93:76:68:f0:be:f1:4a:89:30:e2:86:27:1e:64:9c:09
cardinal: ssh_host_ecdsa_key.pub = e0:83:14:8f:d4:c3:c5:6c:c6:b6:0a:f7:df:bc:e9:2e

PyTorch Fully Sharded Data Parallel (FSDP) is used to speed up model training by parallelizing the training data and sharding model parameters, optimizer states, and gradients across multiple PyTorch processes.
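
As a minimal illustration only, the sketch below wraps a small model in FSDP; it assumes a CUDA-capable node and a launch with `torchrun`, and the model size, batch size, and learning rate are placeholders.

```python
# Minimal FSDP sketch (illustrative): parameters, gradients, and optimizer
# state are sharded across the ranks started by torchrun.
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def main():
    # torchrun sets the environment variables used by init_process_group.
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

    model = torch.nn.Sequential(
        torch.nn.Linear(1024, 1024),
        torch.nn.ReLU(),
        torch.nn.Linear(1024, 10),
    ).cuda()

    # Wrap the model so FSDP shards its state; build the optimizer afterwards.
    model = FSDP(model)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

    for step in range(10):
        # Random tensors stand in for a real, rank-partitioned data loader.
        x = torch.randn(32, 1024, device="cuda")
        y = torch.randint(0, 10, (32,), device="cuda")
        loss = torch.nn.functional.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

A typical single-node launch would look like `torchrun --nproc_per_node=4 train_fsdp.py` (script name assumed).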

Pitzer

CUDA Quantum is a platform for developing quantum-classical applications that leverages NVIDIA's CUDA technology. This platform provides a framework to create and execute quantum algorithms on quantum processors while integrating with classical computing resources. It is designed to accelerate quantum computing tasks and support hybrid quantum-classical workflows, making it an essential tool for researchers and developers in the field of quantum computing.
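
As an illustrative sketch of this hybrid workflow, the example below builds and samples a Bell-state kernel with the `cudaq` Python interface; the kernel syntax assumes a recent CUDA Quantum release and may differ between versions.

```python
# Minimal CUDA Quantum sketch (illustrative): define a two-qubit kernel and
# sample it from classical Python code.
import cudaq

@cudaq.kernel
def bell():
    qubits = cudaq.qvector(2)      # allocate two qubits
    h(qubits[0])                   # put the first qubit in superposition
    x.ctrl(qubits[0], qubits[1])   # entangle with a controlled-X gate
    mz(qubits)                     # measure both qubits

# Expect roughly equal counts of "00" and "11".
counts = cudaq.sample(bell, shots_count=1000)
print(counts)
```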

Cardinal
We use Slurm syntax for all discussions on this page. Please see how to prepare a Slurm job script if your script is written in PBS syntax.

Memory limit

Cardinal

OSC's new Cardinal cluster is a heterogeneous system built on Dell PowerEdge servers and the Intel® Xeon® CPU Max Series with high-bandwidth memory (HBM), designed to efficiently handle memory-bound HPC and AI workloads. Below is a summary of the hardware information:

Ascend, Cardinal

MVAPICH is an implementation of the MPI (Message Passing Interface) standard for parallel processing using a distributed-memory model.
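
As an illustration of the distributed-memory model, the sketch below uses `mpi4py`, which can be built on top of an MPI library such as MVAPICH; the launcher command and rank count are assumptions that depend on the site setup.

```python
# Minimal MPI sketch (illustrative): each rank runs its own process with its
# own memory, and data is exchanged only through explicit MPI calls.
from mpi4py import MPI

comm = MPI.COMM_WORLD       # communicator spanning all ranks
rank = comm.Get_rank()      # this process's rank
size = comm.Get_size()      # total number of ranks

# Each rank contributes its rank number; the sum is collected on rank 0.
total = comm.reduce(rank, op=MPI.SUM, root=0)

if rank == 0:
    print(f"{size} ranks, sum of ranks = {total}")
```

Run, for example, with `srun python hello_mpi.py` inside a job or `mpiexec -n 4 python hello_mpi.py` (file name assumed).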

PyTorch Distributed Data Parallel (DDP) is used to speed up model training by parallelizing the training data across multiple identical model replicas.
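
As a minimal illustration, the sketch below wraps a model in DDP so each process holds an identical replica and gradients are averaged across processes during the backward pass; it assumes a `torchrun` launch and uses random tensors in place of a real data loader.

```python
# Minimal DDP sketch (illustrative): one model replica per rank, gradients
# all-reduced during backward(). Launch with torchrun.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")
    local_rank = dist.get_rank() % torch.cuda.device_count()
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 10).cuda()
    model = DDP(model, device_ids=[local_rank])   # identical replica per rank
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for step in range(10):
        # In practice a DistributedSampler splits the dataset across ranks.
        x = torch.randn(32, 1024, device="cuda")
        y = torch.randint(0, 10, (32,), device="cuda")
        loss = torch.nn.functional.cross_entropy(model(x), y)
        loss.backward()           # gradients are averaged across ranks here
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```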

Owens, Pitzer

AutoDock is a suite of automated docking tools. It is designed to predict how small molecules, such as substrates or drug candidates, bind to a receptor of known 3D structure. AutoDock has applications in X-ray crystallography, structure-based drug design, lead optimization, and more.
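
As an illustrative sketch only, the example below uses the Python bindings of AutoDock Vina (the `vina` package, part of the AutoDock suite) rather than classic AutoDock4; the file names and search-box parameters are placeholders, and the receptor and ligand are assumed to be already prepared in PDBQT format.

```python
# Illustrative docking sketch with the AutoDock Vina Python bindings.
from vina import Vina

v = Vina(sf_name="vina")                  # use the Vina scoring function
v.set_receptor("receptor.pdbqt")          # rigid receptor structure (placeholder)
v.set_ligand_from_file("ligand.pdbqt")    # small-molecule ligand (placeholder)

# Define the search box around the expected binding site (placeholder values).
v.compute_vina_maps(center=[15.0, 53.0, 16.0], box_size=[20, 20, 20])

v.dock(exhaustiveness=8, n_poses=5)       # run the docking search
v.write_poses("docked_poses.pdbqt", n_poses=5)
```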
