
GROMACS

GROMACS is a versatile package of molecular dynamics simulation programs. It is primarily designed for biochemical molecules, but it has also been used on non-biological systems. GROMACS generally scales well on OSC platforms. Starting with version 4.6, GROMACS includes GPU acceleration.

Availability and Restrictions

GROMACS is available on the Glenn and Oakley clusters. The versions currently available at OSC are listed below (S means serial single-node executables, P means parallel multi-node, and C means CUDA, i.e., GPU enabled):

Version   Glenn   Oakley   Notes
3.3.1     S*
3.3.3     S
4.0.3     S
4.5.4     S
4.5.5     SP      SP       Default version on Oakley prior to 09/15/2015
4.6.3             SPC*
5.1               SPC

*: Current default version

You can use module avail gromacs  to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Access

GROMACS is available to all OSC users without restriction.

Usage on Glenn

Set-up on Glenn

To load the default version of the GROMACS module and initialize your environment for use of GROMACS, use module load gromacs. To select a particular software version, use module load gromacs-version. For example, use module load gromacs-4.5.5 to load GROMACS version 4.5.5 on Glenn.

Using GROMACS

To execute a serial GROMACS program interactively, simply run it on the command line, e.g.:
pdb2gmx

Parallel GROMACS programs should be run in a batch environment with mpiexec, e.g.:

mpiexec mdrun_mpi_d

Note that '_mpi' indicates a parallel executable and '_d' indicates a program built with double precision.

Batch Usage on Glenn

When you log into glenn.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your analysis to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.

Interactive Batch Session
For an interactive batch session on Glenn, one can run the following command:
qsub -I -l nodes=1:ppn=8 -l walltime=1:00:00
which gives you 8 cores ( -l nodes=1:ppn=8 ) with 1 hour ( -l walltime=1:00:00 ). You may adjust the numbers per your need.
Non-interactive Batch Job (Serial Run)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. A minimal serial example is sketched below. Sample batch scripts and GROMACS input files are available here:

/nfs/10/srb/workshops/compchem/gromacs/
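
A minimal serial batch script might look like the following sketch. It assumes GROMACS 4.5.5 on Glenn and a hypothetical input file my_molecule.pdb with a matching em.mdp; adjust the file names, force field, and resource requests to your own job, and treat the sample scripts in the directory above as the authoritative starting point:

#PBS -N gromacs_serial
#PBS -l nodes=1:ppn=1
#PBS -l walltime=1:00:00
module load gromacs-4.5.5
cd $PBS_O_WORKDIR
# Copy inputs to fast node-local storage and run there.
cp -p my_molecule.pdb em.mdp $TMPDIR
cd $TMPDIR
# Generate GROMACS coordinates and topology from the PDB file.
pdb2gmx -ignh -ff gromos43a1 -f my_molecule.pdb -o conf.gro -p topol.top -water none
# Put the molecule in a box with at least 0.7 nm from the molecule to the box edge.
editconf -f conf.gro -o boxed.gro -d 0.7
# Preprocess into a run input (.tpr) file, then run the energy minimization serially.
grompp -f em.mdp -c boxed.gro -p topol.top -o em.tpr
mdrun -s em.tpr -o em.trr -c after_em.gro -g em.log -e em.edr
# Copy results back to the submission directory.
cp -p * $PBS_O_WORKDIR/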

Usage on Oakley

Set-up on Oakley

To load the default version of the GROMACS module and initialize your environment for use of GROMACS, use module load gromacs. To select a particular software version, use module load gromacs/version. For example, use module load gromacs/5.1 to load GROMACS version 5.1 on Oakley; use module help gromacs/5.1 to view details such as compiler prerequisites and additional modules required for specific executables.

Using GROMACS

The syntax of the GROMACS executable(s) changed in version 5. To execute a serial GROMACS version 4 program interactively, simply run it on the command line, e.g.:
pdb2gmx

To execute a serial GROMACS version 5 program interactively, simply run it on the command line, e.g.:

gmx pdb2gmx

Note that some serial executables are enabled for single-node parallelism via OpenMP. Parallel multinode GROMACS version 4 programs should be run in a batch environment with mpiexec, e.g.:

mpiexec mdrun_mpi_d

Parallel multinode GROMACS version 5 programs should be run in a batch environment with mpiexec, e.g.:

mpiexec gmx_mpi_d mdrun

Note that '_mpi' indicates a parallel executable, '_d' indicates a program built with double precision, and '_gpu' denotes a GPU executable built with CUDA. See the module help output for specific versions for more details on executable naming conventions.
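
For illustration only (the exact set of executables installed varies by version, so confirm the names with module help), invocations that follow this convention would look like:

mpiexec mdrun_mpi          # version 4, parallel, single precision
mpiexec mdrun_mpi_d        # version 4, parallel, double precision
mpiexec gmx_mpi mdrun      # version 5, parallel, single precision
mpiexec gmx_mpi_d mdrun    # version 5, parallel, double precision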

Batch Usage on Oakley

When you log into oakley.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your analysis to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.

Interactive Batch Session
For an interactive batch session on Oakley, one can run the following command:
qsub -I -l nodes=1:ppn=12 -l walltime=1:00:00
which gives you 12 cores ( -l nodes=1:ppn=12 ) with 1 hour ( -l walltime=1:00:00 ). You may adjust the numbers per your need.
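
Once the interactive job starts, you can load a GROMACS module and run serial executables directly on the compute node. For example, a brief sketch assuming the gromacs/5.1 module and a hypothetical input file protein.pdb:

module load gromacs/5.1
gmx pdb2gmx -ignh -ff gromos43a1 -f protein.pdb -o protein.gro -p protein.top -water none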
Non-interactive Batch Job (Parallel Run)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Sample batch scripts and GROMACS input files are available here:

/nfs/10/srb/workshops/compchem/gromacs/

This simple batch script for Oakley demonstrates some important points:

# GROMACS Tutorial for Solvation Study of Spider Toxin Peptide
# see fwspider_tutor.pdf
#PBS -N fwsinvacuo.oakley
#PBS -l nodes=2:ppn=12
#PBS -l walltime=1:00:00
module load gromacs
# PBS_O_WORKDIR refers to the directory from which the job was submitted.
cd $PBS_O_WORKDIR
# Distribute the input files to the compute nodes' local scratch space.
pbsdcp -p 1OMB.pdb em.mdp $TMPDIR
# Use TMPDIR for best performance.
cd $TMPDIR
# Generate GROMACS coordinates and topology from the PDB file.
pdb2gmx -ignh -ff gromos43a1 -f 1OMB.pdb -o fws.gro -p fws.top -water none
# Define the simulation box; editconf writes out.gro by default when -o is not given.
editconf -f fws.gro -d 0.7
editconf -f out.gro -o fws_ctr.gro -center 2.0715 1.6745 1.914
# Preprocess the inputs into a run input (.tpr) file.
grompp -f em.mdp -c fws_ctr.gro -p fws.top -o fws_em.tpr
# Run the energy minimization in parallel.
mpiexec mdrun_mpi -s fws_em.tpr -o fws_em.trr -c fws_ctr.gro -g em.log -e em.edr
cat em.log
# Copy the results back to the submission directory.
cp -p * $PBS_O_WORKDIR/
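
If you run the same job with GROMACS 5 instead (for example, module load gromacs/5.1), the GROMACS commands change to the gmx/gmx_mpi wrapper syntax described above. A sketch of the equivalent steps (confirm the exact executable names with module help gromacs/5.1):

gmx pdb2gmx -ignh -ff gromos43a1 -f 1OMB.pdb -o fws.gro -p fws.top -water none
gmx editconf -f fws.gro -o out.gro -d 0.7
gmx editconf -f out.gro -o fws_ctr.gro -center 2.0715 1.6745 1.914
gmx grompp -f em.mdp -c fws_ctr.gro -p fws.top -o fws_em.tpr
mpiexec gmx_mpi mdrun -s fws_em.tpr -o fws_em.trr -c fws_ctr.gro -g em.log -e em.edr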

Further Reading

GROMACS home page: http://www.gromacs.org/