LS-DYNA is a general-purpose finite element code for simulating complex structural problems, specializing in nonlinear, transient dynamic problems solved with explicit time integration. LS-DYNA is developed by Livermore Software Technology Corporation (LSTC).
Availability and Restrictions
Versions
LS-DYNA is available on the Cardinal cluster in both serial (smp solver, for single-node jobs) and parallel (mpp solver, for multiple-node jobs) versions. The versions currently available at OSC are:
Version | Solver | Cardinal
---|---|---
13.1.0 | smp | X
| mpp | X
15.0.2 | smp | X
| mpp | X
You can use `module spider ls-dyna` to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.
Access for Academic Users
LS-DYNA is available to academic OSC users with proper validation. To obtain validation, please contact OSC Help for further instruction.
Access for Commercial Users
Contact OSC Help to get access to LS-DYNA if you are a commercial user.
Publisher/Vendor/Repository and License Type
LSTC, Commercial
Usage
Usage on Cardinal
Set-up on Cardinal
To view available modules installed on Cardinal, use `module spider ls-dyna` for smp solvers and `module spider mpp` for mpp solvers. In the module name, '_s' indicates single precision and '_d' indicates double precision; for example, `mpp-dyna/971_d_9.0.1` is the double-precision mpp solver. Use `module load <name>` to load LS-DYNA with a particular software version. For example, use `module load mpp-dyna/971_d_9.0.1` to load the LS-DYNA mpp solver version 9.0.1 with double precision on Cardinal.
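As a concrete sketch (the module names here are the ones used in this page's examples; `module spider` will show what is actually installed):

```
# list available smp and mpp solver modules
module spider ls-dyna
module spider mpp

# load the double-precision mpp solver used in the examples below
module load mpp-dyna/971_d_9.0.1
```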
Batch Usage on Cardinal
When you log into cardinal.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Batch Limit Rules for more info. Batch jobs run on the compute nodes of the system, not on the login node, which makes them preferable for big problems since more resources can be used.
Interactive Batch Session
For an interactive batch session one can run the following command:

```
sinteractive -A <project-account> -N 1 -n 28 -t 00:20:00 -L lsdyna@osc:28
```

which requests one whole node with 28 cores (`-N 1 -n 28`), a walltime of 20 minutes (`-t 00:20:00`), and 28 LS-DYNA licenses (`-L lsdyna@osc:28`). You may adjust the numbers per your need.
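Once the interactive session starts, you are on a compute node and can run the solver directly. A minimal sketch, using the smp module and input file from the serial example below:

```
# load the smp solver and run on the 28 requested cores
module load ls-dyna/971_d_9.0.1
lsdyna I=explorer.k NCPU=28
```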
Non-interactive Batch Job (Serial Run)
A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Please follow the steps below to use LS-DYNA via the batch system:
1) Copy your input files (`explorer.k` in the example below) to your work directory at OSC.

2) Create a batch script, similar to the following file, saved as `job.txt`. It uses the smp solver for a serial job (nodes=1) on Cardinal:
```
#!/bin/bash
#SBATCH --job-name=plate_test
#SBATCH --time=5:00:00
#SBATCH --nodes=1 --ntasks-per-node=28
#SBATCH --account=<project-account>
#SBATCH -L lsdyna@osc:28

# The following lines set up the LS-DYNA environment
module load ls-dyna/971_d_9.0.1
#
# Run LS-DYNA (number of cpus > 1)
#
lsdyna I=explorer.k NCPU=28
```
3) Submit the script to the batch queue with the command `sbatch job.txt`.
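After submitting, you can monitor the job with standard Slurm commands; for example:

```
squeue -u $USER      # list your queued/running jobs (PD = pending, R = running)
squeue -j <jobid>    # check a specific job by the ID printed by sbatch
```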
When the job is finished, all the result files will be found in the directory where you submitted your job (`$SLURM_SUBMIT_DIR`). Alternatively, you can run your job from the temporary directory (`$TMPDIR`), which is faster for the system to access and might be beneficial for bigger jobs. Note that `$TMPDIR` is uniquely associated with the submitted job and is cleared when the job ends, so you need to copy your results back to your work directory at the end of your script.
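For a serial job the copy-in/copy-back pattern is simple. A minimal sketch, assuming the same `explorer.k` input as above:

```
# work out of node-local temporary storage
cd $TMPDIR
cp $SLURM_SUBMIT_DIR/explorer.k .

# run the solver
lsdyna I=explorer.k NCPU=28

# copy results back before the job ends and $TMPDIR is cleared
cp -p * $SLURM_SUBMIT_DIR
```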
Non-interactive Batch Job (Parallel Run)
Please follow the steps below to use LS-DYNA via the batch system:

1) Copy your input files (`explorer.k` in the example below) to your work directory at OSC.

2) Create a batch script, similar to the following file, saved as `job.txt`. It uses the mpp solver for a parallel job (nodes>1) on Cardinal:
```
#!/bin/bash
#SBATCH --job-name=plate_test
#SBATCH --time=5:00:00
#SBATCH --nodes=2 --ntasks-per-node=28
#SBATCH --account=<project-account>
#SBATCH -L lsdyna@osc:56

# The following lines set up the LS-DYNA environment
module load intel/18.0.3
module load intelmpi/2018.3
module load mpp-dyna/971_d_9.0.1
#
# Run LS-DYNA (number of cpus > 1)
#
srun mpp971 I=explorer.k NCPU=56
```
3) Submit the script to the batch queue with the command `sbatch job.txt`.
As with the serial case, the result files will be found in `$SLURM_SUBMIT_DIR` when the job is finished, and you can instead run the job from `$TMPDIR` as long as you copy the results back before the job ends. An example script should include the following lines, using `sbcast` to copy the input file to node-local `$TMPDIR` on every allocated node and `sgather` to collect the results back:
```
...
cd $TMPDIR
sbcast $SLURM_SUBMIT_DIR/explorer.k $TMPDIR/explorer.k
...  # launch the solver and execute
sgather -pr $TMPDIR ${SLURM_SUBMIT_DIR}
# or you may specify a directory for your output files, such as:
# sgather -pr $TMPDIR ${SLURM_SUBMIT_DIR}/output
```
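Putting the pieces together, a full parallel script staged through `$TMPDIR` might look like the following sketch (module names and core counts are those from the example above; adjust for the versions actually installed on Cardinal):

```
#!/bin/bash
#SBATCH --job-name=plate_test
#SBATCH --time=5:00:00
#SBATCH --nodes=2 --ntasks-per-node=28
#SBATCH --account=<project-account>
#SBATCH -L lsdyna@osc:56

module load intel/18.0.3
module load intelmpi/2018.3
module load mpp-dyna/971_d_9.0.1

# stage the input onto node-local storage of every allocated node
cd $TMPDIR
sbcast $SLURM_SUBMIT_DIR/explorer.k $TMPDIR/explorer.k

# launch the mpp solver across both nodes
srun mpp971 I=explorer.k NCPU=56

# gather per-node results back to the submission directory
sgather -pr $TMPDIR ${SLURM_SUBMIT_DIR}/output
```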
Further Reading
- The LS-DYNA homepage
- The LS-DYNA users manual (structural and keyword editions)
- The LS-DYNA Theory Manual
- The LS-PREPOST Tutorial