From WARP3D's webpage:
WARP3D is under continuing development as a research code for the solution of large-scale, 3-D solid models subjected to static and dynamic loads. The capabilities of the code focus on fatigue & fracture analyses, primarily in metals. WARP3D runs on laptops to supercomputers and can analyze models with several million nodes and elements.
Availability and Restrictions
Versions
The following versions of WARP3D are available on OSC clusters:
Version | Owens | Pitzer |
---|---|---|
17.7.1 | X | |
17.7.4 | X | |
17.8.0 | X | |
17.8.7 | X | X |
You can use `module spider warp3d` to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.
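For example, checking what is installed and loading a specific version (17.8.7 here, taken from the table above) might look like the following sketch; `module spider warp3d/<version>` also lists any modules that must be loaded first:

```
# List all WARP3D modules installed on the current cluster
module spider warp3d

# Show details and prerequisites for one specific version
module spider warp3d/17.8.7

# Load the prerequisites (see Setup below) and then that version
module load intel intelmpi
module load warp3d/17.8.7
```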
Access
WARP3D is available to all OSC users. If you have any questions, please contact OSC Help.
Publisher/Vendor/Repository and License Type
University of Illinois at Urbana-Champaign, Open source
Usage
Usage on Owens
Setup on Owens
To configure the Owens cluster for the use of WARP3D, use the following commands:
```
module load intel
module load intelmpi
module load warp3d
```
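As a quick sanity check, the warp3d module is expected to set `$WARP3D_HOME`, the installation path that the batch scripts below rely on; you can confirm it is defined:

```
# Confirm the module exported the WARP3D installation path
echo $WARP3D_HOME

# The hybrid launch script used in the examples below lives there
ls $WARP3D_HOME/warp3d_script_linux_hybrid
```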
Batch Usage on Owens
Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations for Owens and Scheduling Policies and Limits for more info.
Running WARP3D
Below is an example batch script (job.txt) for using WARP3D:

```
#!/bin/bash
#SBATCH --job-name WARP3D
#SBATCH --nodes=1 --ntasks-per-node=28
#SBATCH --time=30:00
#SBATCH --account <project-account>

# Load the modules for WARP3D
module load intel/18.0.3
module load intelmpi/2018.0
module load warp3d

# Copy files to $TMPDIR and move there to execute the program
cp $WARP3D_HOME/example_problems_for_READMEs/mt_cohes_*.inp $TMPDIR
cd $TMPDIR

# Run the solver using 4 MPI tasks and 6 threads per MPI task
$WARP3D_HOME/warp3d_script_linux_hybrid 4 6 < mt_cohes_4_cpu.inp

# Finally, copy files back to your home directory
cp -r * $SLURM_SUBMIT_DIR
```
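The two arguments to warp3d_script_linux_hybrid are the number of MPI ranks and the number of threads per rank; their product should not exceed the cores allocated (the script above uses 4 × 6 = 24 of the 28 cores requested). As a minimal sketch, assuming you keep the 4 ranks that the mt_cohes_4_cpu.inp input appears to be set up for, the thread count can be derived from the SLURM allocation instead of hard-coded:

```
# Sketch: derive threads per rank from the allocation (28 cores / 4 ranks = 7)
RANKS=4
THREADS=$(( SLURM_NTASKS_PER_NODE / RANKS ))
$WARP3D_HOME/warp3d_script_linux_hybrid $RANKS $THREADS < mt_cohes_4_cpu.inp
```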
To run it via the batch system, submit the job.txt file with the following command:

```
sbatch job.txt
```
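Once submitted, the standard SLURM tools can be used to follow the job; by default SLURM writes the job's output to slurm-<jobid>.out in the submission directory:

```
# Show the status of your queued and running jobs
squeue -u $USER

# Follow the job's output once it starts (<jobid> is the number sbatch printed)
tail -f slurm-<jobid>.out
```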
Usage on Pitzer
Setup on Pitzer
To configure the Pitzer cluster for the use of WARP3D, use the following commands:
```
module load intel
module load intelmpi
module load warp3d
```
Batch Usage on Pitzer
Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations for Pitzer and Scheduling Policies and Limits for more info.
Running WARP3D
Below is an example batch script (job.txt) for using WARP3D:

```
#!/bin/bash
#SBATCH --job-name WARP3D
#SBATCH --nodes=1 --ntasks-per-node=40
#SBATCH --time=30:00
#SBATCH --account <project-account>

# Load the modules for WARP3D
module load intel
module load intelmpi
module load warp3d

# Copy files to $TMPDIR and move there to execute the program
cp $WARP3D_HOME/example_problems_for_READMEs/mt_cohes_*.inp $TMPDIR
cd $TMPDIR

# Run the solver using 4 MPI tasks and 6 threads per MPI task
$WARP3D_HOME/warp3d_script_linux_hybrid 4 6 < mt_cohes_4_cpu.inp

# Finally, copy files back to your home directory
cp -r * $SLURM_SUBMIT_DIR
```
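Note that Pitzer nodes provide 40 cores, so the 4 × 6 combination above leaves cores idle. Assuming, as its name suggests, mt_cohes_4_cpu.inp is set up for 4 MPI ranks, one way to occupy the whole node is to raise the thread count instead:

```
# Sketch: 4 MPI ranks x 10 threads per rank = all 40 cores of a Pitzer node
$WARP3D_HOME/warp3d_script_linux_hybrid 4 10 < mt_cohes_4_cpu.inp
```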
To run it via the batch system, submit the job.txt file with the following command:

```
sbatch job.txt
```