GROMACS is a versatile package of molecular dynamics simulation programs. It is primarily designed for biochemical molecules, but it has also been used on non-biological systems. GROMACS generally scales well on OSC platforms. Starting with version 4.6, GROMACS includes GPU acceleration.
Availability and Restrictions
Versions
GROMACS is available on the Owens, Pitzer, Ascend, and Cardinal clusters. Both single and double precision executables are installed. The versions currently available at OSC are the following:
Version | Owens | Pitzer | Ascend | Cardinal | Notes
---|---|---|---|---|---
5.1.2 | SPC | | | | Default version on Owens prior to 09/04/2018
2016.4 | SPC | | | |
2018.2 | SPC | SPC | | |
2020.2 | SPC* | SPC* | | |
2020.5 | SPC | SPC | | |
2022.1 | SPC | SPC | SPC* | |
2023.2 | SPC | SPC | SPC | |
2024.3 | SPC | SPC | SPC | SPC (GNU); SP (Intel) |
S = serial single-node executables; P = parallel multinode executables; C = CUDA (GPU) executables. An asterisk (*) marks the current default version on that cluster.
You can use module spider gromacs to view available modules for a given cluster. To select a particular software version, use module load gromacs/version. For example, use module load gromacs/2024.3 to load GROMACS version 2024.3; after loading, use module help gromacs/2024.3 to view details such as available executables (e.g., Intel builds on Cardinal do not have GPU executables), compiler prerequisites, additional modules required for specific executables, and the suffixes of the executables. Some versions require specific prerequisite modules; such details may be obtained with the command module spider gromacs/version. Feel free to contact OSC Help if you need other versions for your work.
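As a concrete illustration, the following sequence discovers, loads, and inspects a GROMACS module (version 2024.3 is used here only as an example; load any prerequisite modules that module spider reports before loading GROMACS itself):
module spider gromacs            # list GROMACS modules on the current cluster
module spider gromacs/2024.3     # show prerequisites for this particular version
module load gromacs/2024.3       # load the chosen version
module help gromacs/2024.3       # list executables, suffixes, and required modules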
Access
GROMACS is available to all OSC users. If you have any questions, please contact OSC Help.
Publisher/Vendor/Repository and License Type
http://www.gromacs.org/, Open source
Usage
Usage on Owens
Set-up
To load the default version of the GROMACS module, use module load gromacs. To select a particular software version, use module load gromacs/version. For example, use module load gromacs/5.1.2 to load GROMACS version 5.1.2, and use module help gromacs/5.1.2 to view details such as compiler prerequisites, additional modules required for specific executables, and the suffixes of the executables. Some versions require specific prerequisite modules; such details may be obtained with the command module spider gromacs/version.
Using GROMACS
To execute a serial GROMACS (version 5 and later) program interactively, simply run it on the command line, e.g.:
gmx pdb2gmx
Parallel multinode GROMACS (version 5 and later) programs should be run in a batch environment with srun, e.g.:
srun gmx_mpi_d mdrun
Note that '_mpi' indicates a parallel executable and '_d' indicates a program built with double precision ('_gpu' denotes a GPU executable built with CUDA). See the module help output for specific versions for more details on executable naming conventions.
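As a sketch of how these suffixes combine, the invocations below follow that convention; the exact set of executables varies by version and cluster, so confirm the names with module help for the build you load:
gmx mdrun              # serial, single precision
gmx_d mdrun            # serial, double precision ('_d')
srun gmx_mpi mdrun     # parallel MPI, single precision ('_mpi')
srun gmx_mpi_d mdrun   # parallel MPI, double precision ('_mpi' + '_d')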
Batch Usage
When you log into Owens you are actually connected to a login node. To access the compute nodes, you must submit a job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.
Interactive Batch Session
For an interactive batch session on Owens, one can run the following command:
sinteractive -A <project-account> -N 1 -n 28 -t 1:00:00
which gives you one node with 28 cores (-N 1 -n 28) for 1 hour (-t 1:00:00). You may adjust the numbers per your need.
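Once the interactive session starts, load the module and run GROMACS directly on the allocated node. A minimal sketch follows; the input file protein.pdb and the parameter file em.mdp are placeholders for your own files:
module load gromacs
gmx pdb2gmx -f protein.pdb -o protein.gro -p topol.top   # prompts for force field and water model
gmx grompp -f em.mdp -c protein.gro -p topol.top -o em.tpr
gmx mdrun -s em.tpr -deffnm em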
Non-interactive Batch Job (Parallel Run)
A batch script can be created and submitted for a serial, CUDA (GPU), or parallel run. You can create the batch script using any text editor in a working directory on the system of your choice. A parallel example is shown below, followed by a sketch of a GPU run. Sample batch scripts and input files for all types of hardware resources are available here:
~srb/workshops/compchem/gromacs/
This simple batch script demonstrates some important points:
#!/bin/bash
# GROMACS Tutorial for Solvation Study of Spider Toxin Peptide
# see fwspider_tutor.pdf
#SBATCH --job-name fwsinvacuo.owens
#SBATCH --nodes=2 --ntasks-per-node=28
#SBATCH --account=PZS0711

# turn off verbosity for noisy module commands
set +vx
module purge
module load intel/18.0.3
module load mvapich2/2.3
module load gromacs/2018.2
module list
set -vx

cd $SLURM_SUBMIT_DIR
echo $SLURM_SUBMIT_DIR
sbcast -p 1OMB.pdb $TMPDIR/1OMB.pdb
sbcast -p em.mdp $TMPDIR/em.mdp
cd $TMPDIR
mpiexec -ppn 1 gmx pdb2gmx -ignh -ff gromos43a1 -f 1OMB.pdb -o fws.gro -p fws.top -water none
mpiexec -ppn 1 gmx editconf -f fws.gro -d 0.7
mpiexec -ppn 1 gmx editconf -f out.gro -o fws_ctr.gro -center 2.0715 1.6745 1.914
mpiexec -ppn 1 gmx grompp -f em.mdp -c fws_ctr.gro -p fws.top -o fws_em.tpr -maxwarn 1
mpiexec -ppn 1 ls -l
mpiexec gmx_mpi mdrun -s fws_em.tpr -o fws_em.trr -c fws_ctr.gro -g em.log -e em.edr
cp -p * $SLURM_SUBMIT_DIR/
* Note that sbcast does not recursively copy folders; a loop in the job script is needed. Please visit our Job Preparations page to learn more.
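For a CUDA (GPU) run, the following is a minimal sketch rather than a tested OSC script: it assumes one GPU is requested with --gpus-per-node, that the loaded gromacs module provides a CUDA-enabled executable (check module help for the exact name; see the '_gpu' suffix convention above), and that topol.tpr is a placeholder run input prepared beforehand with gmx grompp.
#!/bin/bash
#SBATCH --job-name gromacs-gpu-example
#SBATCH --nodes=1 --ntasks-per-node=1 --cpus-per-task=28
#SBATCH --gpus-per-node=1
#SBATCH --account=<project-account>
#SBATCH --time=1:00:00

module purge
module load gromacs   # pick a version whose module help lists a GPU (CUDA) executable

cd $SLURM_SUBMIT_DIR
# Offload nonbonded interactions to the GPU; topol.tpr is a placeholder .tpr file
gmx mdrun -nb gpu -s topol.tpr -deffnm md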
Usage on Pitzer
Set-up
To load the default version of the GROMACS module, use module load gromacs.
Using GROMACS
To execute a serial GROMACS (version 5 and later) program interactively, simply run it on the command line, e.g.:
gmx pdb2gmx
Parallel multinode GROMACS (version 5 and later) programs should be run in a batch environment with srun, e.g.:
srun gmx_mpi_d mdrun
Note that '_mpi' indicates a parallel executable and '_d' indicates a program built with double precision ('_gpu' denotes a GPU executable built with CUDA). See the module help output for specific versions for more details on executable naming conventions.
Batch Usage
When you log into Pitzer you are actually connected to a login node. To access the compute nodes, you must submit a job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.
Interactive Batch Session
For an interactive batch session on Pitzer, one can run the following command:
sinteractive -A <project-account> -N 1 -n 40 -t 1:00:00
which gives you one node with 40 cores (-N 1 -n 40) for 1 hour (-t 1:00:00). You may adjust the numbers per your need.
Non-interactive Batch Job (Parallel Run)
A batch script can be created and submitted for a serial, CUDA (GPU), or parallel run. You can create the batch script using any text editor in a working directory on the system of your choice. Sample batch scripts and input files for all types of hardware resources are available here:
~srb/workshops/compchem/gromacs/
This simple batch script demonstrates some important points:
#!/bin/bash
# GROMACS Tutorial for Solvation Study of Spider Toxin Peptide
# see fwspider_tutor.pdf
#SBATCH --job-name fwsinvacuo.pitzer
#SBATCH --nodes=2 --ntasks-per-node=40
#SBATCH --account=PZS0711

# turn off verbosity for noisy module commands
set +vx
module purge
module load intel/18.0.3
module load mvapich2/2.3
module load gromacs/2018.2
module list
set -vx

cd $SLURM_SUBMIT_DIR
echo $SLURM_SUBMIT_DIR
sbcast -p 1OMB.pdb $TMPDIR/1OMB.pdb
sbcast -p em.mdp $TMPDIR/em.mdp
cd $TMPDIR
mpiexec -ppn 1 gmx pdb2gmx -ignh -ff gromos43a1 -f 1OMB.pdb -o fws.gro -p fws.top -water none
mpiexec -ppn 1 gmx editconf -f fws.gro -d 0.7
mpiexec -ppn 1 gmx editconf -f out.gro -o fws_ctr.gro -center 2.0715 1.6745 1.914
mpiexec -ppn 1 gmx grompp -f em.mdp -c fws_ctr.gro -p fws.top -o fws_em.tpr -maxwarn 1
mpiexec -ppn 1 ls -l
mpiexec gmx_mpi mdrun -s fws_em.tpr -o fws_em.trr -c fws_ctr.gro -g em.log -e em.edr
cp -p * $SLURM_SUBMIT_DIR/
* Note that sbcast does not recursively copy folders; a loop in the job script is needed. Please visit our Job Preparations page to learn more.
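After a run like the one above completes, the energy file it writes (em.edr) can be examined with the standard GROMACS analysis tools. The following is a hypothetical follow-up step, not part of the sample script; gmx energy prompts for which energy terms to write out:
module load gromacs
gmx energy -f em.edr -o em_potential.xvg   # e.g., select 'Potential' at the prompt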