Storage Environment at OSC

OSC has over two petabytes (PB) of disk storage capacity distributed across several file systems, plus almost 2 PB of backup tape storage. (A petabyte is 10^15, or a quadrillion, bytes.) This guide describes the various storage environments, their characteristics, and their uses.

Storage Hardware

The storage at OSC consists of servers, data storage subsystems, and networks providing a number of storage services to OSC HPC systems. The current configuration consists of:

  • NetApp CE5400 storage server
  • Hitachi AMS1000 storage system
  • Two DataDirect Networks 9900 storage systems
  • Local disk storage on each compute node
  • One IBM 3584 tape robot:
    • 16 LTO tape drives
    • 1900 TB (raw capacity) of LTO tapes
  • 18 home directory servers with a total capacity of 360 TB
  • 16 project directory servers with a total capacity of 660 TB
  • 10 GPFS servers with a total usable space of 400 TB
  • A DDN EXAScaler/SFA10K Lustre file system with 569 TB of usable space

File System Usage

OSC has several different file systems where you can create files and directories. The characteristics of those systems and the policies associated with them determine their suitability for any particular purpose. This section describes the characteristics and policies that you should take into consideration in selecting a file system to use.

The various file systems are described in subsequent sections.

Visibility

Most of our file systems are shared. Directories and files on the shared file systems are accessible from all OSC HPC systems. By contrast, local storage is visible only on the node it is located on. Each compute node has a local disk with scratch file space.

Permanence

Some of our storage environments are intended for long-term storage; files are never deleted by the system or OSC staff. Some are intended as scratch space, with files deleted as soon as the associated job exits. Others fall somewhere in between, with expected data lifetimes of a few months to a couple of years.

Backup policies

Some of the file systems are backed up to tape; some are considered temporary storage and are not backed up. Backup schedules differ for different systems.

In no case do we make an absolute guarantee about our ability to recover data. Please read the official OSC data management policies for details. That said, we have never lost backed-up data and have rarely had an accidental loss of non-backed-up data.

Size/Quota

The permanent (backed-up) file systems all have quotas limiting the amount of file space and the number of files each user or group can use. Your usage and quota information is displayed every time you log in to one of our HPC systems; you can also check it with the quota command. We encourage you to pay attention to these numbers, because your file operations, and probably your compute jobs, will fail if you exceed them.
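
For example, you can check your numbers from any login node (a minimal sketch; the exact output format varies by file system, and PRJ0123 is a placeholder project group):

    quota -s               # -s reports sizes in human-readable units
    quota -s -g PRJ0123    # group usage, if group quotas apply to your project space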

Scratch space on local disks doesn’t have a quota, but it is limited in size. If you have extremely large files, you will have to pay attention to the amount of local file space available on different compute nodes.

Performance

File systems have different performance characteristics including read/write speeds and behavior under heavy load. Performance matters a lot if you have I/O-intensive jobs. Choosing the right file system can have a significant impact on the speed and efficiency of your computations. You should never do heavy I/O in your home or project directories, for example.

Available File Systems

Home Directories

Each user ID has a home directory on one of the NFS shared file systems. You have the same home directory regardless of what system you’re on, including all login nodes and all compute nodes, so your files are accessible everywhere. Most of your work in the login environment will be done in your home directory.

OSC currently has 18 home directory file servers. The absolute path to the home directory for user ID usr1234 will have the form /nfs/nn/usr1234, where nn is a 2-digit number. The environment variable $HOME is the absolute path to your home directory.

The default permissions on home directories for academic projects allow anyone with an OSC HPC account to read your files, although only you have write permission. You can change the permissions if you want to restrict access. Home directories for accounts on commercial projects are slightly more restrictive, and only allow the owning account and the project group to see the files by default.
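
For example, two common adjustments (a sketch; choose permissions that match your own needs):

    chmod 700 $HOME    # only you can read, write, or enter your home directory
    chmod 750 $HOME    # you have full access; your group can read and enter; others have none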

Each user has a quota of 500 gigabytes (GB) of storage and 1,000,000 files. This quota cannot be increased. If you have many small files, you may reach the file limit before you reach the storage limit. In that case we encourage you to “tar” or “zip” your files or directories, creating an archive. If you approach your storage limit, delete any unneeded files and consider compressing the rest with bzip2 or gzip. You can archive/unarchive and compress/uncompress your files inside a batch script, using scratch storage that is not subject to quotas, so your files remain conveniently usable. As always, contact OSC Help if you need assistance.
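
A minimal sketch of doing this inside a batch job, using the job’s local scratch directory ($TMPDIR) as working space (the directory and file names here are placeholders):

    # Build a compressed archive of a directory tree from your home directory,
    # doing the work in local scratch space rather than in your home directory.
    cd $TMPDIR
    tar -czf my_results.tar.gz -C $HOME my_results   # tar + gzip in one step
    cp my_results.tar.gz $HOME/                      # copy the archive back before the job ends

    # To unpack it later:
    #   tar -xzf $HOME/my_results.tar.gz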

Home directories are considered permanent storage. Accounts that have been inactive for 18 months may be archived, but otherwise there is no automatic deletion of files.

All files in the home directories are backed up daily; two copies of each file are written to tape in the tape library.

Access to home directories is relatively slow compared to local or parallel file systems. Batch jobs should not perform heavy I/O in the home directory tree because 1) it will slow down your job and 2) the home directory file servers don’t handle heavy loads gracefully. Instead you should copy your files to fast local storage and run your program there.

Project Directories

For projects that require more than 500 GB of storage and/or more than 1,000,000 files, additional storage space is available. Principal Investigators should contact OSC Help to request additional storage in the "project" space outside the home directory. Allocations of one to five terabytes are typical. Small allocations can be granted by OSC staff; for large allocations you will have to submit a proposal to the Statewide Users’ Group (SUG).

Project directories are created on one of the shared file systems, either NFS or GPFS. The absolute path to the project directory for project PRJ0123 will have one of the following forms: /nfs/projnn/PRJ0123 or /fs/gpfs/PRJ0123.

Default permissions on a project directory allow read and write access by all members of the group, with deletion restricted to the file owner. (OSC projects correspond to Linux groups.)
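
This behavior is typically implemented with a group-writable directory whose sticky bit is set (an assumption about the exact mode bits; check your own directory). For example, using the path form given above:

    ls -ld /nfs/projnn/PRJ0123
    # A mode such as drwxrwx--T means the group can read and write files,
    # while the trailing sticky bit (T) restricts deletion to a file's owner.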

The quota on the project space is shared by all members of the project and corresponds to the allocation that was granted, typically 1-5 TB with a limit of 1,000,000 files.

Project space is allocated for a specific period of time, usually one to three years. At the end of that time you may apply for an extension.

All files in the project directories are backed up daily, with a single copy written to tape.

The recommendations for archiving and compressing files are the same for project directories as for home directories.

Comments about access speed and file server load for home directories apply also to project directories. Batch jobs should not perform heavy I/O in a project directory.

Local Disk

Each compute node has a local disk used for scratch storage. This space is not shared with any other system or node.

The batch system creates a temporary directory for each job on each node assigned to the job. The absolute path to this directory is in the environment variable $TMPDIR. The directory exists only for the duration of the job; it is automatically deleted by the batch system when the job ends. Temporary directories are not backed up.

$TMPDIR is a large area where users may execute codes that produce large intermediate files. Local storage has the highest performance of any of the file systems because data does not have to be sent across the network and handled by a file server. Typical usage is to copy input files, and possibly executable files, to $TMPDIR at the beginning of the job and copy output files to permanent storage at the end of the job. See the batch processing documentation for more information.
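
A minimal sketch of this pattern inside a batch script (the program and file names are placeholders; see the batch processing documentation for the job directives themselves):

    # Stage input to fast local disk, run there, then save the results.
    cp $HOME/project/input.dat $TMPDIR    # copy input files to local scratch
    cd $TMPDIR
    cp $HOME/project/my_program .         # optionally copy the executable as well
    ./my_program input.dat > output.dat
    cp output.dat $HOME/project/          # copy results to permanent storage before the job ends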

The size of the temporary file space on each Oakley node is 812 GB; on Glenn it is 392 GB. This area is used for spool space for stdout and stderr from batch jobs as well as for $TMPDIR. If your job requests less than an entire node, you will be sharing this space with other jobs, although each job has a unique directory in $TMPDIR.

Please use $TMPDIR and not /tmp on the compute nodes to ensure proper cleanup.

The login nodes have local scratch space in /tmp. This area is not backed up, and the system removes files that have not been accessed in the past 24 hours.

Parallel File System

OSC provides a Lustre parallel file system for use as high-performance, high-capacity, shared temporary space. The current capacity of the parallel file system is about 600 TB.

The parallel file system is visible from all OSC HPC systems and all compute nodes at /fs/lustre. It can be used as either batch-managed scratch space or as user-managed temporary space. There is no quota on this system.

The Lustre system replaces the PVFS2 system that was previously available at OSC. There is no need for a special flag such as the :pvfs feature that was used in the past.

The parallel file system is temporary storage, so it is not backed up. Data stored on this system is not recoverable if it is lost for any reason, including user error or hardware failure. Data that has not been used in the last 180 days will be removed from the system.

The batch system creates a scratch directory for each job on the parallel file system. The absolute path to this directory is in the environment variable $PFSDIR. This directory is shared across nodes. It exists only for the duration of the job and is automatically deleted by the batch system when the job ends.
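
A sketch of using $PFSDIR in a parallel job (the MPI launcher and program name are placeholders; use whatever launcher the batch documentation specifies for your system):

    # $PFSDIR is visible from every node in the job, so all processes share it.
    cp $HOME/big_input.dat $PFSDIR                # stage shared input once
    cd $PFSDIR                                    # run with the shared scratch space as working directory
    mpiexec $HOME/bin/my_parallel_program big_input.dat   # keep the executable in your home directory
    cp results.dat $HOME/                         # copy anything you want to keep before the job ends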

Users may also create their own directories under /fs/lustre. Please name the directory with either your user name or your project ID, for example, /fs/lustre/usr1234 or /fs/lustre/PRJ0123. This is a good place to store large amounts of temporary data that you need to keep for up to a few months. Files that have not been accessed for some period of time, currently six months, may be deleted. Check OSC’s data management policy for the official deletion schedule. While this system has been extremely reliable, it should be used only for data that you can regenerate or that you have another copy of. It is not backed up.
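
For example, to set up a user-managed area that follows this naming convention (usr1234 is a placeholder for your own user name):

    mkdir -p /fs/lustre/usr1234
    cd /fs/lustre/usr1234
    # Not backed up; files unused for about six months may be deleted.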

The parallel file system is a high-performance file system that can handle heavy loads. It should be used by parallel jobs that perform heavy I/O and require a directory that is shared across all nodes. It is also suitable for jobs that require more scratch space than is available locally. Note, however, that local disk access is faster than any shared file system, so use local disk whenever it meets your needs.

The Lustre file system is optimized for reads and writes done in large blocks, preferably at least 4 MB. Many small I/O operations, and in particular large numbers of very small files, will result in poor performance.
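
For example, when moving data to or from the parallel file system from the shell, an explicit large block size keeps the transfer in Lustre's preferred regime (the paths here are placeholders):

    # Copy using 4 MB blocks rather than the much smaller default.
    dd if=$HOME/large_input.dat of=/fs/lustre/usr1234/large_input.dat bs=4M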

You should not store executables on the parallel file system. Keep program executables in your home or project directory or in $TMPDIR.
