Proposed OSC Policies for Public Comments
This page lists all proposed OSC policies open for public comment. Your comments help inform our policies and are encouraged. We will post responses to comments on this page after the public comment period closes. Please submit your comments via our online form by the deadline.
Currently Open for Public Comment:
Overview of File Systems
OSC has several different file systems where you can create files and directories. The characteristics of those systems and the policies associated with them determine their suitability for any particular purpose. This section describes the characteristics and policies that you should take into consideration in selecting a file system to use.
The various file systems are described in subsequent sections.
Technical Specifications
The following are technical specifications for Owens.
- Number of Nodes: 824
- Number of CPU Sockets: 1,648 (2 sockets/node)
- Number of CPU Cores: 23,392 (28 cores/node)
- Cores Per Node: 28 cores/node (48 cores/node for Huge Mem Nodes)
- Local Disk Space Per Node: ~1,500 GB in /tmp
Owens Programming Environment
Compilers
C, C++, and Fortran are supported on the Owens cluster. Intel, PGI, and GNU compiler suites are available. The Intel development toolchain is loaded by default. Basic compiler commands for serial programs are sketched below; see also our compilation guide for the recommended options.
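As a minimal sketch of serial compilation with each suite (the optimization flags and module commands are illustrative assumptions, not the documented recommendations):

# Intel (loaded by default)
icc -O2 mycode.c -o mycode
icpc -O2 mycode.cpp -o mycode
ifort -O2 mycode.f90 -o mycode

# PGI (assuming something like: module swap intel pgi)
pgcc -fast mycode.c -o mycode
pgc++ -fast mycode.cpp -o mycode
pgfortran -fast mycode.f90 -o mycode

# GNU (assuming something like: module swap intel gnu)
gcc -O2 mycode.c -o mycode
g++ -O2 mycode.cpp -o mycode
gfortran -O2 mycode.f90 -o mycode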
2016 Storage Service Upgrades
On July 12, 2016, OSC migrated its old GPFS and Lustre file systems to new Project and Scratch services, respectively. We moved 1.22 PB of data; the new capacities are 3.4 PB for Project and 1.1 PB for Scratch. If you store data on these services, there are a few important details to note.
Citation
For more information about citations of OSC, visit https://www.osc.edu/citation.
To cite Owens, please use the following Archival Resource Key:
ark:/19495/hpc6h5b1
Please adjust this citation to fit the required citation style guidelines.
Ohio Supercomputer Center. 2016. Owens Supercomputer. Columbus, OH: Ohio Supercomputer Center. http://osc.edu/ark:19495/hpc6h5b1
Here is the citation in BibTeX format (the entry below is one reasonable rendering of the citation above):
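@misc{Owens2016,
  author       = {{Ohio Supercomputer Center}},
  title        = {Owens Supercomputer},
  year         = {2016},
  note         = {Columbus, OH},
  howpublished = {\url{http://osc.edu/ark:19495/hpc6h5b1}}
}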
Messages from sbatch
Shell warning
Submitting a job script that does not specify a shell on its first line will produce a warning like the one below:
sbatch: WARNING: Job script lacks first line beginning with #! shell. Injecting '#!/bin/bash' as first line of job script.
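To avoid this warning, begin the job script with a shebang line naming the shell. A minimal sketch (the directives and script contents are illustrative):

#!/bin/bash
#SBATCH --time=00:10:00
#SBATCH --nodes=1

echo "Hello from $(hostname)"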
Errors
If sbatch encounters an error, it rejects the job rather than submitting it.
Not specifying a project account
A project account must be specified for a job to run. Please use the --account=<project-code> option to do this.
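For example (PZS0001 is a placeholder project code, not a real allocation):

# On the command line:
sbatch --account=PZS0001 myjob.sh

# Or as a directive inside the job script:
#SBATCH --account=PZS0001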
Owens
OSC's Owens cluster, installed in 2016, is a Dell-built, Intel® Xeon® processor-based supercomputer.
ParMETIS / METIS
ParMETIS (Parallel Graph Partitioning and Fill-reducing Matrix Ordering) is an MPI-based parallel library that implements a variety of algorithms for partitioning unstructured graphs and meshes, and for computing fill-reducing orderings of sparse matrices. ParMETIS extends the functionality provided by METIS and includes routines that are especially suited for parallel AMR computations and large-scale numerical simulations. The algorithms implemented in ParMETIS are based on the parallel multilevel k-way graph-partitioning, adaptive repartitioning, and parallel multi-constrained partitioning schemes developed in the Karypis lab.
METIS (Serial Graph Partitioning and Fill-reducing Matrix Ordering) is a set of serial programs for partitioning graphs, partitioning finite element meshes, and producing fill-reducing orderings for sparse matrices. The algorithms implemented in METIS are based on the multilevel recursive-bisection, multilevel k-way, and multi-constraint partitioning schemes developed in the Karypis lab.
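As a quick illustration, the serial METIS distribution includes standalone programs such as gpmetis (k-way partitioning) and ndmetis (fill-reducing ordering); the module and file names below are assumptions for this sketch:

# Load METIS (module name is an assumption)
module load metis

# Partition the graph in mygraph.graph into 8 parts;
# the partition vector is written to mygraph.graph.part.8
gpmetis mygraph.graph 8

# Compute a fill-reducing ordering of the same graph
ndmetis mygraph.graph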