The Ohio Supercomputer Center's IBM Cluster 1350, named "Glenn", features AMD Opteron multi-core technologies. The system offers a peak performance of more than 54 trillion floating point operations per second and a variety of memory and processor configurations. The current Glenn Phase II components were installed and deployed in 2009, while the earlier phase of Glenn – now decommissioned – had been installed and deployed in 2007.
The hardware configuration consisted of the following:
There were 36 GPU-capable nodes on Glenn, connected to 18 Quadro Plex S4 units for a total of 72 CUDA-enabled graphics devices. Each node had access to two Quadro FX 5800-level graphics cards.
To connect to Glenn, ssh to glenn.osc.edu.
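For example, from a terminal on your local machine (the username below is a placeholder, not an actual account):

```shell
# Connect to the Glenn login node via SSH.
# Replace "username" with your own OSC username.
ssh username@glenn.osc.edu
```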
Refer to the documentation for our batch environment to understand how to use PBS on OSC hardware. Some specifics you will need to know to create well-formed batch scripts:

- Glenn nodes have 8 cores each, so request at most `ppn=8` per node.

For more information about how to use OSC resources, please see our guide on batch processing at OSC. For specific information about modules and file storage, please see the Batch Execution Environment page.
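As a minimal sketch of a well-formed batch script (the job name, wall time, and program path are illustrative placeholders, not values from this page):

```shell
#!/bin/bash
#PBS -N example_job        # job name (placeholder)
#PBS -l nodes=1:ppn=8      # one full Glenn node: 8 cores
#PBS -l walltime=01:00:00  # one hour of wall time (illustrative)
#PBS -j oe                 # merge stdout and stderr into one file

cd $PBS_O_WORKDIR          # start in the directory the job was submitted from
./my_program               # placeholder for your executable
```

Submit the script with `qsub`, e.g. `qsub myscript.sh`.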
When requesting OSC resources for a job, it is strongly suggested that you compare your job's expected memory use against the available per-core memory. On Glenn, this is 3GB per core and 24GB per node.
If your job requests less than a full node (`ppn<8`), it may be scheduled on a node with other running jobs. In this case, your job is entitled to a memory allocation proportional to the number of cores requested (3GB/core). For example, without any memory request (`mem=XX`), a job that requests `nodes=1:ppn=1` will be assigned one core and should use no more than 3GB of RAM, a job that requests `nodes=1:ppn=3` will be assigned 3 cores and should use no more than 9GB of RAM, and a job that requests `nodes=1:ppn=8` will be assigned the whole node (8 cores) with 24GB of RAM.

It is important to keep in mind that the memory limit (`mem=XX`) you set in PBS does not work the way one might expect it to on Glenn. It does not cause your job to be allocated the requested amount of memory, nor does it limit your job's memory usage. For example, a job that requests `nodes=1:ppn=1,mem=9GB` will be assigned one core (which means you should use no more than 3GB of RAM, not the requested 9GB) and will only be charged for one core's worth of Resource Units (RU).
A multi-node job (`nodes>1`) will be assigned whole nodes with 24GB per node. Jobs requiring more than 24GB per node should be submitted to other clusters (Oakley or Ruby).
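As a sketch, the resource request for a multi-node job could look like the following (the node count and wall time are illustrative):

```shell
# Request 4 full Glenn nodes: 32 cores total, 24GB of RAM per node.
#PBS -l nodes=4:ppn=8
#PBS -l walltime=02:00:00
```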
To manage and monitor your memory usage, please refer to Out-of-Memory (OOM) or Excessive Memory Usage.
Here are the queues available on Glenn:
NAME | MAX WALLTIME | NOTES |
---|---|---|
Serial | 168 hours | |
Longserial | 336 hours | Restricted access |
Parallel | 96 hours | |
An individual user can have up to 128 concurrently running jobs and/or up to 2048 processors/cores in use. All the users in a particular group/project can among them have up to 192 concurrently running jobs and/or up to 2048 processors/cores in use. Jobs submitted in excess of these limits are queued but blocked by the scheduler until other jobs exit and free up resources.
A user may have no more than 1,000 jobs submitted to a queue; parallel and serial job queues are treated separately. Jobs submitted in excess of these limits will be rejected.
Glenn retired from service on March 24, 2016.
To cite Glenn, please use the following Archival Resource Key:
ark:/19495/hpc1ph70
Here is the citation in BibTeX format:
@article{Glenn2009,
  ark = {ark:/19495/hpc1ph70},
  url = {http://osc.edu/ark:/19495/hpc1ph70},
  year = {2009},
  author = {Ohio Supercomputer Center},
  title = {Glenn supercomputer}
}
And in EndNote format:
%0 Generic
%T Glenn supercomputer
%A Ohio Supercomputer Center
%R ark:/19495/hpc1ph70
%U http://osc.edu/ark:/19495/hpc1ph70
%D 2009
A citation file in RIS format is also available; when importing it, set your reference manager's import option to .ris.
The Glenn Cluster supercomputer has been retired from service to make way for a much more powerful new system. Removing the cluster shifts its work onto the already heavy workloads of Oakley and Ruby, which may affect you as an OSC client.
Glenn is our oldest and least efficient cluster. In order to make room for our new cluster, to be installed later in 2016, we must remove Glenn to prepare the space and do some facilities work.
Demand for computing services is already high, and removing approximately 20 percent of our total FLOP capability will likely result in more time spent waiting in the queue. Please be patient with the time it takes for your jobs to run. We will be monitoring the queues, and may make scheduler adjustments to achieve better efficiency.
We expect the new cluster to be partially deployed by Aug. 1, and completely deployed by Oct. 1. Even in the partially deployed state, there will be more available nodes, 1.5x more cores, and roughly 3x more FLOPs than there are today.
Please see our webpage: https://www.osc.edu/supercomputing/computing/c16.
We will help. Please let us know if you have any paper deadlines or similar requirements. We may be able to make some adjustments to make your work more efficient, or to help it get through the queue more quickly. To see what things you should consider if you have work to migrate off of Glenn, please visit https://www.osc.edu/glennretirement.
Please contact OSC Help, our 24/7 help desk.
Toll Free: (800) 686-6472
Local: (614) 292-1800
Email: oschelp@osc.edu
Here are the queues available on Glenn. Please note that you will be routed to the appropriate queue based on your walltime and job size request.
Name | Nodes available | Max walltime | Max job size | Notes |
---|---|---|---|---|
Serial | Available minus reservations | 168 hours | 1 node | |
Longserial | Available minus reservations | 336 hours | 1 node | Restricted access |
Parallel | Available minus reservations | 96 hours | 256 nodes | |
Dedicated | Entire cluster | 48 hours | 436 nodes | Restricted access |
"Available minus reservations" means all nodes in the cluster currently operational (this will fluctuate slightly), less the reservations listed below. To access one of the restricted queues, please contact OSC Help. Generally, access will only be granted to these queues if performance of the job cannot be improved, and job size cannot be reduced by splitting or checkpointing the job.
In addition, there are a few standing reservations.
Name | Times | Nodes available | Max walltime | Max job size | Notes |
---|---|---|---|---|---|
Debug | 8AM-6PM weekdays | 16 | 1 hour | 16 nodes | For small interactive and test jobs. |
GPU | ALL | 32 | 336 hours | 32 nodes | Small jobs not requiring GPUs from the serial and parallel queues will backfill on this reservation. |
Occasionally, reservations will be created for specific projects that will not be reflected in these tables.