It is now possible to run Docker and Apptainer/Singularity containers on the Owens and Pitzer clusters at OSC. Single-node jobs are currently supported, including GPU jobs; MPI jobs are planned for the future.
From the Docker website: "A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings."
As of June 21, 2022, Singularity has been replaced by Apptainer, which is the same open-source project under a new name. For more information, visit the Apptainer/Singularity page.
This document describes how to run Docker and Apptainer/Singularity containers on OSC clusters. You can use containers from Docker Hub, Sylabs Cloud, or any other source. As examples we will use hello-world from Singularity Hub and ubuntu from Docker Hub.
If you encounter an error, check the Known Issues on using Apptainer/Singularity at OSC. If the issue cannot be resolved, please contact OSC Help.
Contents
- Getting help
- Setting up your environment
- Access a container
- Run a container
- File system access
- GPU usage within a container
- Build a container
- References
Getting help
The most up-to-date help on Apptainer/Singularity comes from the command itself.
apptainer help
User guides and examples can be found in the Apptainer documentation.
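You can also view the help text for a specific sub-command, such as pull:
apptainer help pull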
Setting up your environment for Apptainer/Singularity usage
No setup is required. You can use Apptainer/Singularity directly on all clusters.
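As a quick check, you can confirm the command is available on any login node, which prints the installed Apptainer version:
[owens-login01]$ apptainer --version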
Accessing a container
An Apptainer/Singularity container is a single file with a .sif extension.
You can simply download ("pull") a container from a hub. Popular hubs are Docker Hub and Singularity Hub. You can search them to see whether they have a container that meets your needs. Docker Hub has more containers and is more up to date, but it serves a much wider community than just HPC. Singularity Hub is HPC-focused, but it has been archived. Additionally, there are domain and vendor repositories such as biocontainers and NVIDIA AI and HPC containers that may have relevant containers.
Pull a container from hubs
Docker Hub
Pull the image tagged 7.2.0 from the gcc repository on Docker Hub. The 7.2.0 is called a tag.
apptainer pull docker://gcc:7.2.0
Filename: gcc_7.2.0.sif
Pull an Ubuntu container from Docker Hub.
apptainer pull docker://ubuntu:18.04
Filename: ubuntu_18.04.sif
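You can also choose the output filename yourself by giving it as the first argument to pull (the name used here is just an example):
apptainer pull my_ubuntu.sif docker://ubuntu:18.04
Filename: my_ubuntu.sif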
Singularity Hub
Pull the singularityhub/hello-world container from Singularity Hub. Since no tag is specified, it pulls from the master branch of the repository.
apptainer pull shub://singularityhub/hello-world
Filename: hello-world_latest.sif
Downloading containers from the hubs is not the only way to get one. You can, for example, get a copy from a colleague's computer or directory. If you would like to create your own container, you can start from the user guide below. If you have any questions, please contact OSC Help.
Running a container
There are four ways to run a container under Apptainer/Singularity.
You can do this either in a batch job or on a login node.
If you are unsure how much memory an Apptainer/Singularity process will require, be sure to request an entire node for the job. It is common for container jobs to be killed by the OOM killer because they use too much RAM.
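Below is a minimal sketch of a Slurm batch script that requests a whole Owens node and runs a container; the account and walltime are placeholders you would replace with your own values.
#!/bin/bash
#SBATCH --job-name=apptainer-example
#SBATCH --account=PAS1234          # placeholder project account
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=28       # a full Owens node has 28 cores
#SBATCH --time=00:30:00

# execute the container's runscript, as in the examples below
./hello-world_latest.sif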
We note that the operating system on Owens is Red Hat:
[owens-login01]$ cat /etc/os-release
NAME="Red Hat Enterprise Linux Server"
VERSION="7.5 (Maipo)"
ID="rhel"
[..more..]
In the examples below we will often check the operating system to show that we are really inside a container.
Run a container like a native command
If you simply run the container image it will execute the container’s runscript.
Example: Run singularityhub/hello-world
Note that this container returns you to your native OS after you run it.
[owens-login01]$ ./hello-world_latest.sif
Tacotacotaco
Use the “run” sub-command
The Apptainer “run” sub-command does the same thing as running a container directly as described above. That is, it executes the container’s runscript.
Example: Run a container from a local file
[owens-login01]$ apptainer run hello-world_latest.sif
Tacotacotaco
Example: Run a container from a hub without explicitly downloading it
[owens-login01]$ apptainer run shub://singularityhub/hello-world
INFO:    Downloading shub image
Progress |===================================| 100.0%
Tacotacotaco
Use the “exec” sub-command
The Apptainer “exec” sub-command lets you execute an arbitrary command within your container instead of just the runscript.
Example: Find out what operating system the singularityhub/hello-world container uses
[owens-login01]$ apptainer exec hello-world_latest.sif cat /etc/os-release
NAME="Ubuntu"
VERSION="14.04.5 LTS, Trusty Tahr"
ID=ubuntu
[..more..]
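As another example, assuming you pulled gcc_7.2.0.sif as shown earlier, you can use exec to run the compiler that ships inside that container:
[owens-login01]$ apptainer exec gcc_7.2.0.sif gcc --version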
Use the “shell” sub-command
The Apptainer “shell” sub-command invokes an interactive shell within a container.
Example: Run an Ubuntu shell. Note the container prompt within the shell.
[owens-login01 ~]$ apptainer shell ubuntu_18.04.sif
Singularity ubuntu_18.04.sif:~> cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04 LTS (Bionic Beaver)"
ID=ubuntu
[.. more ..]
Singularity ubuntu_18.04.sif:~> exit
exit
File system access
When you use a container you run within the container’s environment. The directories available to you by default from the host environment are:
- your home directory
- working directory (directory you were in when you ran the container)
- /fs/ess
- /tmp
You can review our Available File Systems page for more details about our file system access policy.
If you run the container within a job, you will have the usual access to the $PFSDIR environment variable when you add the node attribute "pfsdir" to the job request (--gres=pfsdir). You can access most of our file systems from a container without any special treatment.
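If you need a host directory that is not available by default, you can bind it into the container yourself with the --bind (or -B) flag. A minimal sketch, using a hypothetical project scratch directory:
[owens-login01]$ apptainer exec --bind /fs/scratch/PAS1234 ubuntu_18.04.sif ls /fs/scratch/PAS1234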
GPU usage within a container
If you have a GPU-enabled container you can easily run it on Owens or Pitzer just by adding the --nv flag to the apptainer exec or run command. The example below comes from the "exec" command section of the Apptainer User Guide. It runs a TensorFlow example using a GPU on Owens. (Output has been omitted from the example for brevity.)
[owens-login01]$ sinteractive -n 28 -g 1
...
[o0756]$ git clone https://github.com/tensorflow/models.git
[o0756]$ apptainer exec --nv docker://tensorflow/tensorflow:latest-gpu \
    python ./models/tutorials/image/mnist/convolutional.py
In some cases it may be necessary to bind the CUDA_HOME path and add $CUDA_HOME/lib64 to the shared library search path:
[owens-login01]$ sinteractive -n 28 -g 1
...
[o0756]$ module load cuda
[o0756]$ export APPTAINER_BINDPATH=$CUDA_HOME
[o0756]$ export APPTAINERENV_LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CUDA_HOME/lib64
[o0756]$ apptainer exec --nv my_container mycmd
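As a quick check that a GPU is visible inside a container, you can run nvidia-smi through the --nv flag; a sketch using the Ubuntu image pulled earlier, run on a GPU node:
[o0756]$ apptainer exec --nv ubuntu_18.04.sif nvidia-smi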
Build a container
It is possible to build or create a custom container, but it will require additional setup. Please contact OSC support for more details.
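For reference, a custom container is normally described by a definition file and built with the apptainer build sub-command. The sketch below is only illustrative (the base image, package, and runscript are placeholders), and the build itself may need the additional setup mentioned above.
Bootstrap: docker
From: ubuntu:18.04

%post
    # commands run inside the container at build time (illustrative)
    apt-get update && apt-get install -y python3

%runscript
    # what "apptainer run" will execute
    python3 --version

Such a file (for example, my_container.def) would then be built with:
apptainer build my_container.sif my_container.def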
References
- https://apptainer.org/docs
- https://github.com/ArangoGutierrez/Singularity-tutorial
- https://hpc.nih.gov/apps/singularity.html
- https://www.docker.com/