Glenn

The Ohio Supercomputer Center's IBM Cluster 1350, named "Glenn," featured AMD Opteron multi-core technology. The system offered a peak performance of more than 54 trillion floating point operations per second and a variety of memory and processor configurations. The Glenn Phase II components were installed and deployed in 2009; the earlier phase of Glenn, now decommissioned, was installed and deployed in 2007.

03/24/2016: Glenn Cluster has been retired. Please see our FAQ page.

Hardware

The hardware configuration consisted of the following:

  • 436 System x3455 compute nodes
    • Dual socket, quad core 2.5 GHz Opterons
    • 24 GB RAM
    • 393 GB local disk space in /tmp
  • 2 System x3755 login nodes
    • Quad socket, quad core 2.4 GHz Opterons
    • 64 GB RAM
  • Voltaire 20 Gbps PCI Express adapters

There were 36 GPU-capable nodes on Glenn, connected to 18 Quadro Plex S4s for a total of 72 CUDA-enabled graphics devices. Each node had access to two Quadro FX 5800-level graphics cards.

  • Each Quadro Plex S4 had these specs:
    • 4 Quadro FX 5800 GPUs
    • 240 cores per GPU
    • 4 GB memory per card
  • The 36 compute nodes in Glenn contained:
    • Dual socket, quad core 2.5 GHz Opterons
    • 24 GB RAM
    • 393 GB local disk space in /tmp
    • 20 Gbps InfiniBand ConnectX host channel adapter (HCA)
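Within a GPU job, the two graphics devices allocated to a node could be confirmed from the shell; this assumes the standard NVIDIA nvidia-smi utility was available on Glenn's GPU nodes:

    # List the CUDA-capable devices visible to this node
    nvidia-smi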

How to Connect

To connect to Glenn, ssh to glenn.osc.edu.
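For example, from a terminal (replace username with your OSC username):

    ssh username@glenn.osc.edu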

Batch Specifics

Refer to the documentation for our batch environment to understand how to use PBS on OSC hardware. Some specifics you will need to know to create well-formed batch scripts (see the sample script after this list):

  • All compute nodes on Glenn have 8 cores, so the processors-per-node (ppn) value is 8. Parallel jobs must use ppn=8.
  • If you need more than 24 GB of RAM per node, you will need to run your job on Oakley.
  • GPU jobs must request whole nodes (ppn=8) and are allocated two GPUs each.
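As a minimal sketch of a well-formed script under these rules, the following requests two whole nodes; the job name, walltime, and executable name (my_program) are placeholders, and the mpiexec launch assumes an MPI program built in OSC's environment:

    #!/bin/bash
    #PBS -N example_job
    #PBS -l walltime=1:00:00
    #PBS -l nodes=2:ppn=8
    #PBS -j oe

    # Run from the directory the job was submitted from
    cd $PBS_O_WORKDIR

    # Launch the MPI executable across the 16 allocated cores
    mpiexec ./my_program

Submit the script with qsub. A GPU job would instead request a single whole node (nodes=1:ppn=8) and use the two GPUs allocated to it.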

Using OSC Resources

For more information about how to use OSC resources, please see our guide on batch processing at OSC. For specific information about modules and file storage, please see the Batch Execution Environment page.
