LAMMPS

The Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a classical molecular dynamics code designed for high-performance simulation of large atomistic systems. LAMMPS generally scales well on OSC platforms, provides a variety of modeling techniques, and offers GPU-accelerated computation.

Availability and Restrictions

Versions

LAMMPS is available on all clusters. The following versions are currently installed at OSC:

Version       Owens  Pitzer  Ascend  Cardinal
14May16       P
31Mar17       PC
16Mar18       PC
22Aug18       PC     PC
5Jun19        PC     PC
3Mar20        PC*    PC*
29Oct20       PC     PC
29Sep2021.3   PC     PC      PC*
20220623.1                   PC
20230802.3                           PC
* Current default version; S = serial executables; P = parallel; C = CUDA
IMPORTANT NOTE: You must load the correct compiler and MPI modules before you can load LAMMPS. To determine which modules you need, use module spider lammps/{version}. Some LAMMPS versions are available with multiple compiler and MPI versions; in general, we recommend using the latest versions. (In particular, mvapich2/2.3.2 is recommended over 2.3.1 and 2.3; see the known issue.)

You can use module spider lammps to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.
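
For example, to find and load the prerequisites for a specific version, one might do the following; the compiler and MPI module names shown here are only illustrative, so substitute whatever module spider actually reports on your cluster:

# list the compiler/MPI combinations this version was built against
module spider lammps/3Mar20
# load the prerequisites reported by module spider (illustrative names)
module load intel/19.0.5 mvapich2/2.3.2
module load lammps/3Mar20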

Access

LAMMPS is available to all OSC users. If you have any questions, please contact OSC Help.

Publisher/Vendor/Repository and License Type

Sandia National Laboratories, Open source

Usage

Usage on Owens

Set-up

To load the default version of the LAMMPS module and set up your environment, use module load lammps. To select a particular software version, use module load lammps/version. For example, use module load lammps/14May16 to load LAMMPS version 14May16.

Using LAMMPS

Once a module is loaded, LAMMPS can be run with the following command:
lammps < input.file

To see information on the packages and executables for a particular installation, run the module help command, for example:

module help lammps

Batch Usage

By connecting to owens.osc.edu you are logged into one of the login nodes, which have computing resource limits. To gain access to the cluster's main computational resources, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.

Interactive Batch Session

For an interactive batch session one can run the following command:

sinteractive -A <project-account> -N 1 -n 28 -g 1 -t 00:20:00 

which requests one whole node with 28 cores (-N 1 -n 28), for a walltime of 20 minutes (-t 00:20:00), with one GPU (-g 1). You may adjust the numbers per your need.
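
Once the interactive shell starts on the compute node, LAMMPS can be run directly. A minimal sketch, where in.lj is only a placeholder for your own input file:

module load lammps
# run a short test interactively; in.lj stands in for your input file
lammps < in.lj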

Non-interactive Batch Job (Parallel Run)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Sample batch scripts and LAMMPS input files are available here:

~srb/workshops/compchem/lammps/

Below is a sample batch script. It asks for 56 processors and 10 hours of walltime. If the job runs longer than 10 hours, it will be terminated.

#!/bin/bash
#SBATCH --job-name=chain  
#SBATCH --nodes=2 --ntasks-per-node=28  
#SBATCH --time=10:00:00  
#SBATCH --account=<project-account>

module load lammps  
sbcast -p chain.in $TMPDIR/chain.in
cd $TMPDIR  
lammps < chain.in  
sgather -pr $TMPDIR $SLURM_SUBMIT_DIR/output
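
Assuming the script above is saved as job.sh (the file name is arbitrary), it can be submitted to the batch system and monitored with standard Slurm commands:

# submit the batch script
sbatch job.sh
# check the status of your queued and running jobs
squeue -u $USER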

Usage on Pitzer

Set-up

To load the default version of the LAMMPS module and set up your environment, use module load lammps. To select a particular software version, use module load lammps/version.

Using LAMMPS

Once a module is loaded, LAMMPS can be run with the following command:
lammps < input.file

To see information on the packages and executables for a particular installation, run the module help command, for example:

module help lammps

Batch Usage

To access a cluster's main computational resources, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.

Interactive Batch Session

For an interactive batch session one can run the following command:

sinteractive -A <project-account> -N 1 -n 48 -g 1 -t 00:20:00 

which requests one whole node with 48 cores (-N 1 -n 48), for a walltime of 20 minutes (-t 00:20:00), with one GPU (-g 1). You may adjust the numbers per your need.

Non-interactive Batch Job (Parallel Run)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Sample batch scripts and LAMMPS input files are available here:

~srb/workshops/compchem/lammps/

Below is a sample batch script. It asks for 96 processors and 10 hours of walltime. If the job runs longer than 10 hours, it will be terminated.

#!/bin/bash
#SBATCH --job-name=chain 
#SBATCH --nodes=2 --ntasks-per-node=48 
#SBATCH --time=10:00:00 
#SBATCH --account=<project-account>

module load lammps 
sbcast -p chain.in $TMPDIR/chain.in
cd $TMPDIR 
lammps < chain.in 
sgather -pr $TMPDIR $SLURM_SUBMIT_DIR/output
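
The Pitzer installations marked C in the table above include CUDA-enabled builds. Below is a minimal sketch of a single-node GPU job; it assumes one GPU is requested with --gpus-per-node and that the lammps command forwards the standard LAMMPS -sf gpu and -pk gpu switches to the executable. Treat these details as assumptions and check module help lammps for the GPU instructions specific to your installation.

#!/bin/bash
#SBATCH --job-name=chain-gpu
#SBATCH --nodes=1 --ntasks-per-node=48
#SBATCH --gpus-per-node=1
#SBATCH --time=1:00:00
#SBATCH --account=<project-account>

module load lammps
sbcast -p chain.in $TMPDIR/chain.in
cd $TMPDIR
# -sf gpu enables GPU-accelerated styles; -pk gpu 1 uses one GPU per node
# (assumes the lammps wrapper passes these flags through)
lammps -sf gpu -pk gpu 1 < chain.in
sgather -pr $TMPDIR $SLURM_SUBMIT_DIR/output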

Known Issues

LAMMPS 14May16 velocity command problem on Owens

Updated: December 2016
Versions Affected: LAMMPS 14May16
LAMMPS 14May16 on Owens can hang when using the velocity command. Inputs that hang on Owens work on Oakley and Ruby. LAMMPS 31Mar17 on Owens also works. Here is an example failing input snippet:
velocity mobile create 298.0 111250 mom yes dist gaussian
run 1000
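
If an input hangs with 14May16 on Owens, the simplest workaround noted above is to switch to a newer installation, for example:

module load lammps/31Mar17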

Further Reading

LAMMPS home page: https://www.lammps.org/