
Request Access

Projects that would like to use the Ruby cluster will need to request access. This is because of the particulars of the Ruby environment, which include its size, MICs, GPUs, and scheduling policies.

CCAPP Condo on Ruby Cluster

In the condo model, participants (condo owners) purchase one or more compute nodes for the shared cluster, while OSC provides all infrastructure, as well as maintenance and services. The CCAPP Condo on the Ruby cluster is owned by the Center for Cosmology and AstroParticle Physics at The Ohio State University. Prof. Annika Peter has been heavily involved in specifying its requirements.

Hardware

Detailed system specifications:

  • 21 total nodes

    • 20 cores per node
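To make the node shape concrete, below is a minimal sketch of how a condo owner might submit a whole-node batch job from Python. It assumes Torque/Moab-style scheduling (qsub with nodes/ppn syntax, matching the 20-core nodes above) and a hypothetical project account string PCON0000; the actual account, queue, and scheduler should be taken from OSC's documentation.

    import subprocess

    # A minimal sketch, not OSC's documented procedure: the account string
    # "PCON0000" and the Torque/Moab-style directives are assumptions.
    job_script = """#PBS -N condo_test
    #PBS -l nodes=1:ppn=20
    #PBS -l walltime=01:00:00
    #PBS -A PCON0000

    cd $PBS_O_WORKDIR
    mpiexec ./my_app
    """

    # qsub reads the job script from stdin and prints the new job ID.
    result = subprocess.run(
        ["qsub"], input=job_script, text=True,
        capture_output=True, check=True,
    )
    print("Submitted job:", result.stdout.strip())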

Prof. Gaitonde's Condo on Ruby Cluster

In the condo model, participants (condo owners) purchase one or more compute nodes for the shared cluster, while OSC provides the infrastructure, as well as maintenance and services. Prof. Gaitonde's Condo on the Ruby cluster is owned by Prof. Datta Gaitonde of the Mechanical and Aerospace Engineering Department at The Ohio State University.

Hardware

Detailed system specifications:

  • 96 total nodes

    • 20 cores per node
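The submission sketch shown after the CCAPP hardware list above applies here as well; with 96 nodes in this condo, a larger job might request several whole nodes at once (for example, nodes=4:ppn=20 in the assumed Torque-style syntax).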

SGI Altix 350
In October 2004, OSC engineers installed three SGI Altix 350s, each configured with 16 processors for SMP and large-memory applications. Each system included 32 GB of memory, 16 1.4-gigahertz Intel Itanium 2 processors, 4 Gigabit Ethernet interfaces, 2-Gigabit FibreChannel interfaces, and approximately 250 GB of temporary disk.
Cray XD1
The OSC-Springfield offices officially opened in April 2004. Over the next several months, OSC engineers installed the 16-MSP Cray X1 system, the Cray XD1 system, and the 33-node Apple Xserve G5 Cluster at the Springfield office. A 1-Gbit/s Ethernet WAN service linked the cluster to OSC's remote-site hosts in Columbus. The G5 Cluster featured one front-end node configured with four gigabytes of RAM, two 2.0-gigahertz PowerPC G5 processors, 2-Gigabit Fibre Channel interfaces, approximately 750 gigabytes of local disk, and about 12 terabytes of Fibre Channel-attached storage.
Cray X1
PIV cluster

In December 2003, OSC engineers installed a 512-CPU Pentium 4 Linux Cluster. Replacing the AMD Athlon cluster, the P4 doubled the existing system's power with a sizable increase in speed. With a theoretical peak of 2,457 gigaflops, the P4 cluster contained 256 dual-processor Pentium 4 Xeon systems with four gigabytes of memory per node and 20 terabytes of aggregate disk space.
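As a rough sanity check on that peak figure, and assuming 2.4-gigahertz Xeon processors each completing two floating-point operations per cycle (an assumption, as the clock speed is not stated above): 512 CPUs × 2.4 gigahertz × 2 operations per cycle ≈ 2,457.6 gigaflops, consistent with the quoted theoretical peak.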

SGI Altix 3700

In September 2003, OSC engineers installed an SGI Altix 3700 system to replace its SGI Origin 2000 system and to augment its HP Itanium 2 Cluster. The Altix was a non-uniform memory access system with 32 Itanium 2 processors and 64 gigabytes of memory, and it ran the Linux operating system. OSC's HP Cluster also included Itanium 2 processors and ran Linux.

Itanium 2 cluster

In October 2002, OSC engineers installed the 300-CPU HP Workstation Itanium 2 Linux zx6000 Cluster. OSC selected HP's computing cluster for its blend of high performance, flexibility, and low cost. The HP cluster used Myricom's Myrinet high-speed interconnect and ran Red Hat Linux Advanced Workstation, a 64-bit Linux operating system.
