LAMMPS

The Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a classical molecular dynamics code designed for high-performance simulation of large molecular systems. LAMMPS generally scales well on OSC platforms, supports a wide variety of modeling techniques, and offers GPU-accelerated computation.

Availability & Restrictions

LAMMPS is available to all OSC users without restriction.

The following versions of LAMMPS are available on OSC systems:

Version   Glenn   Oakley
Oct06     X
Jul07     X
Jan08     X
Apr08     X
Sep09     X
Mar10     X
Jun10     X
Aug10*    X
Oct10     X
Mar11*    X
Jun11     X
Jan12     X
Feb12             X
May12             X

Usage

Set-up

To use LAMMPS on either Glenn or Oakley, first run the following command to set up your environment:

module load lammps

To see other available versions, run the following command:

module avail
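
To restrict the listing to LAMMPS modules only, the module name can be given as an argument (a standard feature of the module command), for example:

module avail lammps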

Using LAMMPS

Once a module is loaded, LAMMPS can be run with the following command:

lammps < input.file
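
LAMMPS reads its commands from standard input, which is why the input file is supplied by redirection. To capture the screen output in a file as well (the output file name below is just illustrative), redirect standard output:

lammps < input.file > output.file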

Batch Usage

Sample batch scripts and LAMMPS input files are available here:

/nfs/10/srb/workshops/compchem/lammps/
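
These examples can be copied into a directory of your own for experimentation, for example (the destination directory name here is arbitrary):

cp -r /nfs/10/srb/workshops/compchem/lammps/ ~/lammps-examples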

Below is a sample batch script for the Glenn cluster. It requests 2 nodes with 8 processor cores each (16 cores total) and 10 hours of walltime; if the job runs beyond 10 hours, it will be terminated.

#PBS -N chain
#PBS -l nodes=2:ppn=8
#PBS -l walltime=10:00:00
#PBS -S /bin/bash
#PBS -j oe

module load lammps
cd $PBS_O_WORKDIR
# distribute the input file to each node's local scratch space
pbsdcp chain.in $TMPDIR
cd $TMPDIR
lammps < chain.in
# gather all output files back to the submission directory
pbsdcp -g '*' $PBS_O_WORKDIR
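
Assuming the script above is saved under a name such as chain.pbs (the file name is arbitrary), it is submitted to the batch system with qsub:

qsub chain.pbs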

GPU Usage

LAMMPS can run on GPUs on both OSC clusters; see the sample scripts for details. The following example is specific to the Glenn cluster and demonstrates using GPUs to speed up certain pair styles. It shows how to load and run a GPU-enabled version of LAMMPS. Below is a sample PBS script that uses one node and two GPUs for the computation.

#PBS -N lammpsTest
#PBS -l nodes=1:ppn=8,feature=gpu
#PBS -l walltime=00:10:00
#PBS -S /bin/bash
#PBS -j oe

# load the MPI, CUDA, and FFTW modules required by the GPU-enabled LAMMPS build
module switch mvapich2-1.4-gnu
module load cuda-3.0
module load fftw2-2.1.5-double-mvapich2-1.4-gnu
module load lammps-25Mar11cuda

cd $PBS_O_WORKDIR

# copy the input file to node-local scratch space and run from there
cp lj-gpu.in $TMPDIR
cd $TMPDIR

# two MPI processes, one per GPU on the node
mpiexec -np 2 lmp_osc < lj-gpu.in > out

# copy results back to the submission directory
cp $TMPDIR/* $PBS_O_WORKDIR

Below is a sample input file with the modifications necessary to use a GPU pair style; it uses both GPUs.

# pairwise Newton's third law must be off for the GPU pair styles
newton off

units lj
atom_style atomic
lattice fcc 0.8442
region box block 0 20 0 20 0 20
create_box 1 box
create_atoms 1 box
mass 1 1.0

velocity all create 1.44 87287 loop geom

# GPU-accelerated Lennard-Jones pair style
pair_style lj/cut/gpu 2.5
pair_coeff 1 1 1.0 1.0 2.5

neighbor 0.3 bin
neigh_modify delay 0 every 20 check no

# enable the GPU package on GPUs 0 through 1 (both GPUs on the node)
fix 0 all gpu force/neigh 0 1 1.0
fix 1 all nve

timestep 0.003
thermo 100
run 100

Please note that you cannot run more than two MPI processes per node; since there are only two GPUs per node, using more will cause the application to hang. The number of GPUs specified in the LAMMPS input must match the number of processes launched in the PBS script.
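
For example, to run the same calculation on a single GPU instead, both the fix in the LAMMPS input and the MPI process count in the PBS script would change. The following is a sketch based on the two-GPU example above; consult the LAMMPS documentation for the exact fix gpu arguments in your version:

# in the LAMMPS input: use only GPU 0
fix 0 all gpu force/neigh 0 0 1.0

# in the PBS script: launch a single MPI process
mpiexec -np 1 lmp_osc < lj-gpu.in > out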

Please refer to the LAMMPS documentation for other pair styles that can be used in GPU-accelerated simulations.

Further Reading

LAMMPS home page: http://lammps.sandia.gov/
