LS-DYNA is a general-purpose finite element code for simulating complex structural problems, specializing in nonlinear, transient dynamic problems solved with explicit integration. LS-DYNA is one of the codes developed at Livermore Software Technology Corporation (LSTC).
LS-DYNA is available on the Cardinal cluster in both serial (smp solver, for single-node jobs) and parallel (mpp solver, for multiple-node jobs) versions. The versions currently available at OSC are:
Version | Solver | Cardinal
---|---|---
13.1.0 | smp | X
13.1.0 | mpp | X
15.0.2 | smp | X
15.0.2 | mpp | X
You can use module spider ls-dyna to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.
LS-DYNA is available to academic OSC users with proper validation. To obtain validation, please contact OSC Help for further instruction. If you are a commercial user, contact OSC Help about getting access to LS-DYNA.
LSTC, Commercial
To view the available modules installed on Cardinal, use module spider ls-dyna for smp solvers and module spider mpp for mpp solvers. In the module name, '_s' indicates single precision and '_d' indicates double precision. For example, mpp-dyna/971_d_9.0.1 is an mpp solver module built in double precision. Use module load name to load LS-DYNA with a particular software version. For example, use module load mpp-dyna/971_d_9.0.1 to load the LS-DYNA mpp solver version 9.0.1 with double precision.
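As a quick check, you can list the matching modules and confirm what has been loaded; a minimal sketch (the exact version strings installed on Cardinal may differ, so consult module spider first):

```bash
# List the LS-DYNA solver modules installed on the cluster
module spider ls-dyna   # smp solvers
module spider mpp       # mpp solvers

# Load a specific solver and verify it appears in your environment
module load mpp-dyna/971_d_9.0.1
module list
```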
When you log in to cardinal.osc.edu, you are actually logged in to a Linux machine referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems; refer to Batch Limit Rules for more information. Batch jobs run on the compute nodes of the system, not on the login node, which makes them preferable for large problems since more resources can be used.
For an interactive batch session, one can run the following command:
sinteractive -A <project-account> -N 1 -n 28 -t 00:20:00 -L lsdyna@osc:28
which requests one whole node with 28 cores (-N 1 -n 28) for a walltime of 20 minutes (-t 00:20:00). You may adjust the numbers per your need.
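Once the interactive session starts on a compute node, you can load a solver module and run LS-DYNA directly at the prompt. A minimal sketch, assuming the smp solver and a hypothetical input file explorer.k:

```bash
# Inside the interactive session (running on a compute node)
module load ls-dyna/971_d_9.0.1   # choose the version you need
lsdyna I=explorer.k NCPU=28       # NCPU should match the cores requested with -n
```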
A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Please follow the steps below to use LS-DYNA via the batch system:
1) Copy your input files (explorer.k in the example below) to your work directory at OSC.
2) Create a batch script similar to the following file, saved as job.txt. It uses the smp solver for a serial job (nodes=1) on Owens:
#!/bin/bash
#SBATCH --job-name=plate_test
#SBATCH --time=5:00:00
#SBATCH --nodes=1 --ntasks-per-node=28
#SBATCH --account <project-account>
#SBATCH -L lsdyna@osc:28

# The following lines set up the LSDYNA environment
module load ls-dyna/971_d_9.0.1
#
# Run LSDYNA (number of cpus > 1)
#
lsdyna I=explorer.k NCPU=28
3) Submit the script to the batch queue with the command: sbatch job.txt.
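After submission, you can track the job with standard Slurm commands, for example:

```bash
sbatch job.txt       # prints the job ID on submission
squeue -u $USER      # check whether the job is pending or running
sacct -j <jobid>     # review state and resource usage once it completes
```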
When the job is finished, all the result files will be found in the directory where you submitted your job ($SLURM_SUBMIT_DIR). Alternatively, you can run your job from the temporary directory ($TMPDIR), which is faster for the system to access and may be beneficial for bigger jobs. Note that $TMPDIR is uniquely associated with the job and is cleared when the job ends, so you need to copy your results back to your work directory at the end of your script; a sketch is shown below.
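For the serial case, a minimal sketch of this $TMPDIR workflow (with explorer.k as a placeholder input file) could look like:

```bash
# Run from the fast node-local scratch space, then copy results back
cd $TMPDIR
cp $SLURM_SUBMIT_DIR/explorer.k .
lsdyna I=explorer.k NCPU=28
cp -p * $SLURM_SUBMIT_DIR/   # $TMPDIR is cleared when the job ends
```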
1) Copy your input files (explorer.k in the example below) to your work directory at OSC.
2) Create a batch script similar to the following file, saved as job.txt. It uses the mpp solver for a parallel job (nodes>1) on Owens:
#!/bin/bash
#SBATCH --job-name=plate_test
#SBATCH --time=5:00:00
#SBATCH --nodes=2 --ntasks-per-node=28
#SBATCH --account <project-account>
#SBATCH -L lsdyna@osc:56

# The following lines set up the LSDYNA environment
module load intel/18.0.3
module load intelmpi/2018.3
module load mpp-dyna/971_d_9.0.1
#
# Run LSDYNA (number of cpus > 1)
#
srun mpp971 I=explorer.k NCPU=56
3) Submit the script to the batch queue with the command: sbatch job.txt.
As with the serial case, the result files will be found in the directory where you submitted your job ($SLURM_SUBMIT_DIR), or you can run the job from the temporary directory ($TMPDIR), which is faster for the system to access, and copy your results back to your work directory at the end of your script (remember that $TMPDIR is cleared when the job ends). An example script should include the following lines:
...
cd $TMPDIR
sbcast $SLURM_SUBMIT_DIR/explorer.k explorer
...
# launch the solver and execute

sgather -pr $TMPDIR ${SLURM_SUBMIT_DIR}
# or you may specify a directory for your output files, such as
# sgather -pr $TMPDIR ${SLURM_SUBMIT_DIR}/output
LS-OPT is a package for design optimization, system identification, and probabilistic analysis with an interface to LS-DYNA.
The following versions of ls-opt are available on OSC clusters:
Version | Cardinal
---|---
6.0.0 | X*
You can use module spider ls-opt to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.
In order to use LS-OPT, you need LS-DYNA. LS-DYNA is available to academic OSC users with proper validation. To obtain validation, please contact OSC Help for further instruction.
LSTC, Commercial
To configure your environment for LS-OPT, use module load ls-opt. The default version will be loaded. To select a particular LS-OPT version, use module load ls-opt/version. For example, use module load ls-opt/6.0.0 to load LS-OPT 6.0.0.

LS-PrePost is an advanced pre- and post-processor that is delivered free with LS-DYNA.
The following versions of ls-prepost are available on OSC clusters:
Version | Cardinal
---|---
4.6 | X*
You can use module spider ls-prepost to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.
In order to use LS-PrePost, you need LS-DYNA. LS-DYNA is available to academic OSC users with proper validation. To obtain validation, please contact OSC Help for further instruction.
LSTC, Commercial
To configure your environment for LS-PrePost, use module load ls-prepost. The default version will be loaded. To select a particular LS-PrePost version, use module load ls-prepost/<version>. For example, use module load ls-prepost/4.6 to load LS-PrePost 4.6.

This page describes how to specify a user-defined material to use within LS-DYNA. The user-defined subroutines in LS-DYNA allow the program to be customized for particular applications. In order to define a user material, LS-DYNA must be recompiled.
The first step to running a simulation with user defined material is to build a new executable. The following is an example done with solver version mpp971_s_R7.1.1.
When you log into the Oakley system, load mpp971_s_R7.1.1 with the command:
module load mpp-dyna/R7.1.1
Next, copy the mpp971_s_R7.1.1 object files and Makefile to your current directory:
cp /usr/local/lstc/mpp-dyna/R7.1.1/usermat/* $PWD
Next, update the dyn21.f file with your user defined material model subroutine. Please see the LS-DYNA User's Manual (Keyword version) for details regarding the format and structure of this file.
Once your user-defined model is set up correctly in dyn21.f, build the new mpp971 executable with the command:
make
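Putting these steps together, a hedged sketch of the rebuild workflow (the build directory name is arbitrary; adjust the paths for the solver version you actually use):

```bash
# Rebuild mpp971 with a user-defined material routine
module load mpp-dyna/R7.1.1
mkdir usermat_build && cd usermat_build
cp /usr/local/lstc/mpp-dyna/R7.1.1/usermat/* .
# ... edit dyn21.f to add your material subroutine ...
make   # produces a new mpp971 executable in this directory
```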
To execute a multi-processor (ppn > 1) run with your new executable, follow these steps:
1) move your input file to a directory on an OSC system (pipe.k in the example below)
2) copy your newly created mpp971 executable to this directory as well
3) create a batch script (lstc_umat.job) like the following:
#PBS -N LSDYNA_umat
#PBS -l walltime=1:00:00
#PBS -l nodes=2:ppn=8
#PBS -j oe
#PBS -S /bin/csh

# This is the template batch script for running a pre-compiled
# MPP 971 v7600 LS-DYNA.
# Total number of processors is ( nodes x ppn )
#
# The following lines set up the LSDYNA environment
module load mpp-dyna/R7.1.1
#
# Move to the directory where the job was submitted from
# (i.e. PBS_O_WORKDIR = directory where you typed qsub)
#
cd $PBS_O_WORKDIR
#
# Run LSDYNA
# NOTE: you have to put in your input file name
#
mpiexec mpp971 I=pipe.k NCPU=16
4) Next, submit this job to the batch queue with the command:
qsub lstc_umat.job
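While the job runs, you can monitor it with standard PBS commands; a minimal sketch:

```bash
qstat -u $USER       # list your jobs and their current states
```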
The output result files will be saved to the directory you ran the qsub command from (known as $PBS_O_WORKDIR).
Online documentation is available on the LSTC website.