Cray T3D MPP

On April 18, 1994, OSC engineers took delivery of a 32-processor Cray T3D MPP, an entry-level massively parallel processing system. Each processor included a DEC Alpha chip, eight megawords of memory, and Cray-designed memory logic.

Cray Y-MP8/864

In August 1989, OSC engineers completed the installation of the $22 million Cray Y-MP8/864 system, which was deemed the largest and fastest supercomputer in the world for a short time. The seven-ton system was able to calculate 200 times faster than many mainframes of that era and underwent several weeks of stress testing from “friendly users,” who loaded the machine to 97 percent capacity for 17 consecutive days.

Cray X-MP/24


X-MP and OSC staff

In the spring of 1987, Ohio State budgeted $8.2 million over the next five years to lease a Cray X-MP/24, OSC’s first real supercomputer. The X-MP arrived June 1 at the OSU IRCC loading dock on Kinnear Road.

HOWTO: Submit multiple jobs using parameters

Users often want to submit a large number of jobs at once, each with different parameters. These parameters could be anything, including the path of a data file or different input values for a program. This how-to shows how you can do this using a simple Python script, a CSV file, and a template batch script. You will need to adapt this advice to your own situation.
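The idea above can be sketched as follows. This is a minimal illustration, not an OSC-provided script: the file names, template placeholders, and program name are all assumptions, and the CSV is inlined here only to keep the sketch self-contained (in practice you would read it from a file such as params.csv).

```python
import csv
import io
import string

# Hypothetical batch-script template; ${...} fields are filled in per job.
# $$ escapes a literal dollar sign so $PBS_O_WORKDIR survives substitution.
TEMPLATE = string.Template("""#PBS -N ${jobname}
#PBS -l walltime=${walltime}
cd $$PBS_O_WORKDIR
./my_program ${input_file}
""")

# One row per job; the column names match the template's placeholders.
PARAMS_CSV = """jobname,walltime,input_file
run01,01:00:00,data/a.dat
run02,02:00:00,data/b.dat
"""

def render_job_scripts(csv_text):
    """Return one rendered batch script per row of the parameter CSV."""
    scripts = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        scripts.append(TEMPLATE.substitute(row))
    return scripts

scripts = render_job_scripts(PARAMS_CSV)
# Each rendered script would normally be written to its own file
# and submitted with qsub.
```

Each row of the CSV produces one complete batch script, so adding jobs is just a matter of adding rows.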

Parallel netCDF

PnetCDF is a library providing high-performance parallel I/O while maintaining file-format compatibility with Unidata's NetCDF, specifically the CDF-1 and CDF-2 formats. Although NetCDF supports parallel I/O starting with version 4, those files must be in HDF5 format. PnetCDF is currently the only choice for carrying out parallel I/O on files in the classic formats (CDF-1 and CDF-2). In addition, PnetCDF supports the CDF-5 file format, an extension of CDF-2 that supports more data types and allows users to define large dimensions, attributes, and variables (>2B elements).

Decommissioned Supercomputers

OSC has operated a number of supercomputer systems over the years. Here is a list of previous machines and their specifications.

Statewide Users Group Agenda - Dec 4, 2014

Wednesday, Dec. 3rd

4:00  Allocations Committee Meeting
Queues and Reservations

Here are the queues available on Ruby. Please note that you will be routed to the appropriate queue based on your walltime and job size request.

Name      Nodes available                 max walltime    max job size    notes
Serial    Available minus reservations    168 hours       1 node

Executing Programs

Batch Requests

Batch requests are handled by the TORQUE resource manager and Moab Scheduler as on the Oakley and Glenn systems. Use the qsub command to submit a batch request, qstat to view the status of your requests, and qdel to delete unwanted requests. For more information, see the manual pages for each command.
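As an illustration of the commands above, a minimal PBS batch script might look like the following. This is a generic sketch, not an OSC-supplied script: the job name, walltime, node request, and program name are all placeholder values you would replace with your own.

```shell
#PBS -N example_job          # job name (illustrative)
#PBS -l walltime=1:00:00     # requested walltime
#PBS -l nodes=1:ppn=1        # one node, one processor

# Run from the directory the job was submitted from.
cd $PBS_O_WORKDIR
./my_program input.dat

# Typical workflow from the command line:
#   qsub myscript.sh     # submit the request; prints the job ID
#   qstat -u $USER       # view the status of your requests
#   qdel <jobid>         # delete an unwanted request
```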

Some changes specific to Ruby are listed here:
