Prof. Gaitonde's Dedicated Compute

Dedicated compute services at OSC (also referred to as the Condo model) involve users purchasing one or more compute nodes for the shared cluster, while OSC provides the infrastructure, maintenance, and services. Prof. Gaitonde's Condo on the Pitzer cluster is owned by Prof. Datta Gaitonde of the Department of Mechanical and Aerospace Engineering at The Ohio State University.

Hardware

Detailed system specifications:

  • 48 standard dense compute nodes

    • 48 cores per node @ 2.9 GHz

    • 192 GB of memory per node

    • 1 TB of local disk space per node

  • Dual Intel Xeon 8268 (Cascade Lake) CPUs

  • Mellanox EDR (100Gbps) Infiniband networking

Connecting

Prof. Gaitonde's Condo is accessible only to users under the project accounts PCON0014, PCON0015, and PCON0016. Condo users are guaranteed access to their nodes within 4 hours of a job submission to their respective queue on the cluster, provided that resources are available.

Before getting access to the condo, you need to log in to Pitzer at OSC by connecting to the following hostname:

pitzer.osc.edu

You can either use an ssh client application or execute ssh on the command line in a terminal window as follows:

ssh <username>@pitzer.osc.edu

From there, you can run programs interactively (only for small test jobs) or through batch requests. You get access to the condo by adding --account=PCON0000 to your request, where PCON0000 is replaced by your individual project code for Pitzer, such as PCON0015. For more information on the Pitzer cluster, please refer to the Pitzer Documentation page.
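
For instance, a short interactive test session on the condo nodes can be requested with Slurm's salloc command; the node count and time limit below are illustrative placeholders:

salloc --account=PCON0015 --nodes=1 --ntasks-per-node=48 --time=1:00:00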

OSC clusters use Slurm for job scheduling and resource management. Slurm, which stands for Simple Linux Utility for Resource Management, is a widely used open-source HPC resource management and scheduling system that originated at Lawrence Livermore National Laboratory. Please refer to this page for instructions on how to prepare and submit Slurm job scripts. A compatibility layer that allows users to submit PBS batch job scripts is also available; however, we encourage users to convert their Torque/Moab PBS batch scripts to Slurm.
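
As an illustration of that conversion, the following shows common Torque/Moab directives alongside their Slurm equivalents (the project code is a placeholder):

#PBS -A PCON0015              →  #SBATCH --account=PCON0015
#PBS -l nodes=2:ppn=48        →  #SBATCH --nodes=2 --ntasks-per-node=48
#PBS -l walltime=1:00:00      →  #SBATCH --time=1:00:00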

For example, specify your project code by:

#SBATCH --account=PCON0015

To request 2 Pitzer nodes:

#SBATCH --nodes=2 --ntasks-per-node=48
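
Putting these directives together, a minimal condo job script might look like the sketch below; the job name, walltime, and executable name (my_solver) are placeholders, and the module versions are whatever your environment provides:

#!/bin/bash
#SBATCH --account=PCON0015
#SBATCH --job-name=condo_test
#SBATCH --nodes=2 --ntasks-per-node=48
#SBATCH --time=1:00:00

# Load the default compiler and MPI stack, then launch the application with MPI
module load intel
module load mvapich2
srun ./my_solver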

File Systems

Prof. Gaitonde's Condo accesses the same OSC mass storage environment as our other clusters. Therefore, condo users have the same home directory as on the Pitzer and Owens clusters. Full details of the storage environment are available in our storage environment guide.

Software Environment

Users on the condo nodes have access to all software packages installed on the Pitzer cluster. By default, you will have the batch scheduling software modules, the Intel compiler, and an appropriate version of mvapich2 loaded. Use module load <package> to add a software package to your environment. Use module list to see which modules are currently loaded and module avail to see the modules that are available to load. To search for modules that may not be visible due to dependencies or conflicts, use module spider.
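
For example, a typical sequence for inspecting and adjusting your environment might look like the following (the gromacs package is used purely as an illustration):

module list             # show currently loaded modules
module avail            # list modules available to load
module spider gromacs   # search for a package hidden by dependencies or conflicts
module load gromacs     # add the package to your environment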

You can keep informed of the software packages that have been made available on Pitzer by viewing the Software by System page and selecting the Pitzer system.

Using OSC Resources

For more information about how to use OSC resources, please see our guide on batch processing at OSC. For specific information about modules and file storage, please see the Batch Execution Environment page. Contact OSC Help if you have any other questions.
