ANSYS offers a comprehensive software suite that spans the entire range of physics, providing access to virtually any field of engineering simulation that a design process requires. Support is provided by ANSYS, Inc.
Version | Cardinal |
---|---|
2024R1 | X |
OSC has an Academic Multiphysics Campus Solution license from Ansys. The license includes most of the features that Ansys provides. See "Academic Multiphysics Campus Solution Products" in this table for all available products at OSC.
OSC has an "Academic Research " license for ANSYS. This allows for academic use of the software by Ohio faculty and students, with some restrictions. To view current ANSYS node restrictions, please see ANSYS's Terms of Use.
Use of ANSYS products at OSC for academic purposes requires validation. Please contact OSC Help for further instruction.
Contact OSC Help for getting access to ANSYS if you are a commercial user.
Publisher/Vendor and license type: Ansys, Inc., Commercial
For more information on how to use each ANSYS product on OSC systems, refer to its documentation page provided at the end of this page.
Due to the way our Fluent and ANSYS modules are configured, loading either module multiple times simultaneously will cause a cryptic error. This most commonly happens when several of a user's jobs start at the same time and all load the module at once. For the error to manifest, the modules have to be loaded at precisely the same time, which is rare for any single job but likely to happen eventually over the long term.
If you encounter this error, you are not at fault. Please resubmit the failed job(s).
If you frequently submit large numbers of Fluent or ANSYS jobs, we recommend staggering your job submission times to lower the chance of two jobs starting, and hence loading the module, at the same time. Another solution is to establish dependencies between jobs, so that jobs start only one after another. To do this, add the SLURM directive:
#SBATCH --dependency=after:jobid
to jobs that should start only after another job has started, replacing jobid with the job ID of the job to wait for. If you have additional questions, please contact OSC Help.
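As a minimal sketch of this approach (job1.txt and job2.txt are hypothetical script names), you can capture the first job's ID at submission time and hand it to the second job's dependency:

# Submit the first job; --parsable makes sbatch print only the job ID
jobid=$(sbatch --parsable job1.txt)
# Submit the second job so that it starts only after the first one has started
sbatch --dependency=after:${jobid} job2.txt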
If you run into this error:
OMP: Error #100: Fatal system error detected. OMP: System error #22: Invalid argument forrtl: error (76): Abort trap signal
Try setting the environment variable KMP_AFFINITY=disabled before running Ansys.
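For example, in a job script this could look like the following (a sketch; the input file name is hypothetical):

export KMP_AFFINITY=disabled
module load ansys
ansys < ansys.in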
ANSYS Mechanical is a finite element analysis (FEA) tool that enables you to analyze complex product architectures and solve difficult mechanical problems. You can use ANSYS Mechanical to simulate the real-world behavior of components and sub-systems, and customize it to test design variations quickly and accurately.
ANSYS Mechanical is available on the Cardinal Cluster. You can see the currently available versions in the table on the main Ansys page.
You can use module spider ansys to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.
Use of ANSYS for academic purposes requires validation. In order to obtain validation, please contact OSC Help for further instruction.
Contact OSC Help for getting access to ANSYS if you are a commercial user.
To load the default version, use module load ansys. To select a particular software version, use module load ansys/version. For example, use module load ansys/17.2 to load ANSYS version 17.2. Following a successful loading of the ANSYS module, you can access the ANSYS Mechanical commands and utility programs located in your execution path:
ansys <switch options> <file>
The ANSYS Mechanical command takes a number of Unix-style switches and parameters.
The -j Switch
The command accepts a -j switch. It specifies the "job id," which determines the naming of output files. The default is the name of the input file.
The -d Switch
The command accepts a -d switch. It specifies the device type. The value can be X11, x11, X11C, x11c, or 3D.
The -m Switch
The command accepts a -m switch. It specifies the amount of working storage obtained from the system. The units are megawords.
The memory requirement for the entire execution will be approximately 5300000 words more than the -m specification. This is calculated for you if you use ansnqs to construct an NQS request.
The -b [nolist] Switch
The command accepts a -b switch. It specifies that no user input is expected (batch execution).
The -s [noread] Switch
The command accepts a -s switch. By default, the start-up file is read during an interactive session and not read during batch execution. These defaults may be changed with the -s command line argument. The noread option of the -s argument specifies that the start-up file is not to be read, even during an interactive session. Conversely, the -s argument with the -b batch argument forces the reading of the start-up file during batch execution.
The -g [off] Switch
The command accepts a -g switch. It specifies that the ANSYS graphical user interface is started automatically.
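As an illustration of how these switches combine (a hypothetical sketch; the job id and file names are made up), a batch run with a custom job id could look like:

module load ansys
ansys -b -j proj1 < proj1.in > proj1.out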
ANSYS Mechanical parameters
ANSYS Mechanical parameters may be assigned values on the command line. The parameter must be at least two characters long and must be a legal parameter name. Give the parameter on the command line preceded by a dash (-), followed immediately by a space and then its value:
module load ansys
ansys -pval1 -10.2 -EEE .1e6

This sets pval1 to -10.2 and EEE to 100000.
When you log into cardinal.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your ANSYS Mechanical analysis to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Batch Limit Rules for more info. Batch jobs run on the compute nodes of the system and not on the login node. Batch mode is preferable for big problems since more resources can be used.
Interactive mode is similar to running ANSYS Mechanical on a desktop machine in that the graphical user interface will be sent from OSC and displayed on the local machine. Interactive jobs are run on compute nodes of the cluster, by turning on X11 forwarding. The intention is that users can run ANSYS Mechanical interactively for the purpose of building their model and preparing their input file. Once developed, this input file can then be run in non-interactive batch mode.
To run ANSYS Mechanical interactively, a batch job needs to be submitted from the login node, to request the necessary compute resources, with X11 forwarding. For example, the following command requests one core (-N 1 -n 1) for a walltime of 1 hour (-t 1:00:00), with ANSYS license tokens:

sinteractive -N 1 -n 1 -t 1:00:00 -L ansys@osc:1,ansyspar@osc:24 -A <account>
You may adjust the numbers per your need. This job will queue until resources become available. Once the job starts, you are automatically logged in on the compute node, and you can launch ANSYS Mechanical and start the graphical interface with the following commands:
module load ansys
ansys -g
A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. For a given model, prepare the input file with ANSYS Mechanical commands (named ansys.in, for example) for the batch run. Below is an example batch script (job.txt) for a serial run:
#!/bin/bash
#SBATCH --job-name=ansys_test
#SBATCH --time=1:00:00
#SBATCH --nodes=1 --ntasks-per-node=1
#SBATCH -L ansys@osc:1
#SBATCH --account=<account>

cd $TMPDIR
cp $SLURM_SUBMIT_DIR/ansys.in .
module load ansys
ansys < ansys.in
cp <output files> $SLURM_SUBMIT_DIR
In order to run it via the batch system, submit the job.txt file with the command: sbatch job.txt
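Once submitted, the job can be monitored with standard Slurm commands; for example:

sbatch job.txt
squeue -u $USER    # list your queued and running jobs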
To take advantage of the powerful compute resources at OSC, you may choose to run distributed ANSYS Mechanical for large problems. Multiple nodes and cores can be requested to accelerate the solution time. Note that you'll need to change your batch script slightly for distributed runs.
For distributed ANSYS Mechanical jobs, the number of processors needs to be specified in the command line with options '-dis -np':
#!/bin/bash
#SBATCH --job-name=ansys_test
#SBATCH --time=1:00:00
#SBATCH --nodes=1 --ntasks-per-node=28
#SBATCH --account=<account>
#SBATCH -L ansys@osc:1,ansyspar@osc:24
...
ansys -b -dis -mpi ibmmpi -np ${SLURM_NTASKS} -i ansys.in
...
Notice that in the script above, the ANSYS parallel license (ansyspar) is requested in addition to the base ansys license, in the format

#SBATCH -L ansys@osc:1,ansyspar@osc:n

where n = m - 4, with m being the total number of cores requested for the job. This is necessary whenever more than 4 cores are requested (m > 4); for example, the 28-core script above requests 28 - 4 = 24 HPC tokens. The same applies to the parallel example below.
The following shows changes in the batch script if 2 nodes on Cardinal are requested for a parallel ANSYS Mechanical job:
#!/bin/bash
#SBATCH --job-name=ansys_test
#SBATCH --time=3:00:00
#SBATCH --nodes=2 --ntasks-per-node=28
#SBATCH -L ansys@osc:1,ansyspar@osc:52
...
ansys -b -dis -mpi ibmmpi -np ${SLURM_NTASKS} -i ansys.in
...
pbsdcp -g '<output files>' $SLURM_SUBMIT_DIR
The pbsdcp -g command in the last line of the script above makes sure that all result files generated on the different compute nodes are copied back to the working directory.
ANSYS CFX (called CFX hereafter) is a computational fluid dynamics (CFD) program for modeling fluid flow and heat transfer in a variety of applications.
CFX is available on the Cardinal Cluster. You can see the currently available versions in the table on the main Ansys page.
You can use module spider ansys to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.
Use of ANSYS products for academic purposes requires validation. In order to obtain validation, please contact OSC Help for further instruction.
Currently, there are in total 50 ANSYS base license tokens and 900 HPC tokens for academic users. These base tokens and HPC tokens are shared with all ANSYS products we have at OSC. A base license token will allow CFX to use up to 4 cores without any additional tokens. If you want to use more than 4 cores, you will need an additional "HPC" token per core. For instance, a serial CFX job with 1 core will need 1 base license token while a parallel CFX job with 28 cores will need 1 base license token and 24 HPC tokens.
Contact OSC Help for getting access to CFX if you are a commercial user.
To load the default version, use module load ansys. To select a particular software version, use module load ansys/version. For example, use module load ansys/17.2 to load CFX version 17.2 on Cardinal. When you log into cardinal.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your analysis to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Batch Limit Rules for more info. Batch jobs run on the compute nodes of the system and not on the login node. Batch mode is preferable for big problems since more resources can be used.
Interactive mode is similar to running CFX on a desktop machine in that the graphical user interface will be sent from OSC and displayed on the local machine. Interactive jobs are run on compute nodes of the cluster, by turning on X11 forwarding. The intention is that users can run CFX interactively for the purpose of building their model and preparing their input file. Once developed, this input file can then be run in non-interactive batch mode.
To run the CFX GUI interactively, a batch job needs to be submitted from the login node, to request the necessary compute resources, with X11 forwarding. For example, the following command requests one core (-N 1 -n 1) for a walltime of one hour (-t 1:00:00), with one ANSYS CFD license (modify as per your own needs):
sinteractive -N 1 -n 1 -t 1:00:00 -L ansys@osc:1
Once the interactive job has started, run the following commands to set up and start the CFX GUI:
module load ansys
cfx5
A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice.
Below is an example batch script (job.txt) for a serial run with an input file (test.def):
#!/bin/bash
#SBATCH --job-name=serialjob_cfx
#SBATCH --time=1:00:00
#SBATCH --nodes=1 --ntasks-per-node=1
#SBATCH -L ansys@osc:1

#Set up CFX environment.
module load ansys

#Copy CFX files like .def to $TMPDIR and move there to execute the program
cp test.def $TMPDIR/
cd $TMPDIR

#Run CFX in serial with test.def as input file
cfx5solve -batch -def test.def

#Finally, copy files back to your home directory
cp * $SLURM_SUBMIT_DIR
In order to run it via the batch system, submit the job.txt file with the command: sbatch job.txt
CFX can be run in parallel, but it is very important that you read the documentation in the CFX Manual on the details of how this works.
In addition to requesting the base license token (-L ansys@osc:1), you need to request copies of the ansyspar license, i.e., HPC tokens (-L ansys@osc:1,ansyspar@osc:[n]), where [n] is equal to the number of cores you requested minus 4.
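For example, following the token arithmetic above, a single-node job with 28 cores would request 28 - 4 = 24 HPC tokens:

#SBATCH -L ansys@osc:1,ansyspar@osc:24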
Parallel jobs have to be submitted on Cardinal via the batch system. An example of the batch script follows:
#!/bin/bash
#SBATCH --job-name=paralleljob_cfx
#SBATCH --time=10:00:00
#SBATCH --nodes=2 --ntasks-per-node=28
#SBATCH -L ansys@osc:1,ansyspar@osc:52

#Set up CFX environment.
module load ansys

#Copy CFX files like .def to $TMPDIR and move there to execute the program
cp test.def $TMPDIR/
cd $TMPDIR

#Convert the node information into the host-list format CFX expects (node1*n,node2*n,...)
nodes=$(srun hostname | sort | uniq -c | awk '{print $2 "*" $1}' | paste -sd, -)

#Run CFX in parallel with test.def as input file
#if multiple nodes
cfx5solve -batch -def test.def -par-dist $nodes -start-method "Platform MPI Distributed Parallel"
#if one node
#cfx5solve -batch -def test.def -par-dist $nodes -start-method "Platform MPI Local Parallel"

#Finally, copy files back to your home directory
cp * $SLURM_SUBMIT_DIR
ANSYS FLUENT (called FLUENT hereafter) is a state-of-the-art computer program for modeling fluid flow and heat transfer in complex geometries.
FLUENT is available on the Cardinal Cluster. You can see the currently available versions in the table on the main Ansys page.
You can use module spider ansys to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.
Use of ANSYS products for academic purposes requires validation. In order to obtain validation, please contact OSC Help for further instruction.
Currently, there are in total 50 ANSYS base license tokens and 900 HPC tokens for academic users. These base tokens and HPC tokens are shared with all ANSYS products we have at OSC. A base license token will allow FLUENT to use up to 4 cores without any additional tokens. If you want to use more than 4 cores, you will need an additional "HPC" token per core. For instance, a serial FLUENT job with 1 core will need 1 base license token while a parallel FLUENT job with 28 cores will need 1 base license token and 24 HPC tokens.
Contact OSC Help for getting access to FLUENT if you are a commercial user.
To load the default version, use module load ansys. To select a particular software version, use module load ansys/version. For example, use module load ansys/17.2 to load FLUENT version 17.2 on Cardinal. When you log into cardinal.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your FLUENT analysis to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Batch Limit Rules for more info. Batch jobs run on the compute nodes of the system and not on the login node. Batch mode is preferable for big problems since more resources can be used.
Interactive mode is similar to running FLUENT on a desktop machine in that the graphical user interface will be sent from OSC and displayed on the local machine. Interactive jobs are run on compute nodes of the cluster, by turning on X11 forwarding. The intention is that users can run FLUENT interactively for the purpose of building their model and preparing their input file. Once developed, this input file can then be run in non-interactive batch mode.
To run the FLUENT GUI interactively, a batch job needs to be submitted from the login node, to request the necessary compute resources, with X11 forwarding. For example, the following command requests one whole node with 28 cores (-N 1 -n 28) for a walltime of one hour (-t 1:00:00), with FLUENT license tokens (modify as per your own needs):
sinteractive -N 1 -n 28 -t 1:00:00 -L ansys@osc:1,ansyspar@osc:24
Once the interactive job has started, run the following commands to set up and start the FLUENT GUI:
module load ansys
fluent
A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice.
Below is an example batch script (job.txt) for a serial run with an input file (run.input) on Cardinal:
#!/bin/bash
#SBATCH --job-name=serial_fluent
#SBATCH --time=1:00:00
#SBATCH --nodes=1 --ntasks-per-node=1
#SBATCH -L ansys@osc:1
#
# The following lines set up the FLUENT environment
#
module load ansys
#
# Copy files to $TMPDIR and move there to execute the program
#
cp test_input_file.cas test_input_file.dat run.input $TMPDIR
cd $TMPDIR
#
# Run fluent
fluent 3d -g < run.input
#
# Where the file 'run.input' contains the commands you would normally
# type in at the Fluent command prompt.
# Finally, copy files back to your home directory
cp * $SLURM_SUBMIT_DIR
As an example, your run.input file might contain:
file/read-case-data test_input_file.cas
solve/iterate 100
file/write-case-data test_result.cas
file/confirm-overwrite yes
exit
yes
In order to run it via the batch system, submit the job.txt file with the command: sbatch job.txt
FLUENT can be run in parallel, but it is very important that you read the documentation in the FLUENT Manual on the details of how this works.
In addition to requesting the FLUENT base license token (-L ansys@osc:1), you need to request copies of the ansyspar license, i.e., HPC tokens (-L ansys@osc:1,ansyspar@osc:[n]), where [n] is equal to the number of cores you requested minus 4.
Parallel jobs have to be submitted to Cardinal via the batch system. An example of the batch script follows:
#!/bin/bash
#SBATCH --job-name=parallel_fluent
#SBATCH --time=3:00:00
#SBATCH --nodes=2 --ntasks-per-node=28
#SBATCH -L ansys@osc:1,ansyspar@osc:52
set -x   # echo commands as they are executed
hostname
#
# The following lines set up the FLUENT environment
#
module load ansys
#
# Create the config file for socket communication library
#
# Create list of nodes to launch job on
rm -f pnodes
srun hostname | sort > pnodes
export ncpus=`cat pnodes | wc -l`
#
# Run fluent
fluent 3d -t$ncpus -pinfiniband.ofed -cnf=pnodes -g < run.input
FLUENT parallel jobs with the default MPI (Intel MPI) may experience startup failures, leading to job hangs, due to a recent Slurm upgrade. Intel MPI in FLUENT uses SSH as the default bootstrap mechanism to launch the Hydra process manager. Starting with Slurm version 23.11, the environment variable I_MPI_HYDRA_BOOTSTRAP_EXEC_EXTRA_ARGS=--external-launcher is added because Slurm is set as the default bootstrap system (I_MPI_HYDRA_BOOTSTRAP=slurm). However, this causes an issue when SSH is utilized as the bootstrap system.
Prepend export -n I_MPI_HYDRA_BOOTSTRAP_EXEC_EXTRA_ARGS to the fluent command line.
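In a job script, this could look like the following (a sketch based on the parallel example above):

export -n I_MPI_HYDRA_BOOTSTRAP_EXEC_EXTRA_ARGS
fluent 3d -t$ncpus -cnf=pnodes -g < run.input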
The ANSYS Workbench platform is the backbone for delivering a comprehensive and integrated simulation system to users. See the ANSYS Workbench platform page for more information.
ANSYS Workbench is available on the Cardinal Cluster. You can see the currently available versions in the table on the main Ansys page.
You can use module spider ansys to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.
Use of ANSYS products for academic purposes requires validation. In order to obtain validation, please contact OSC Help for further instruction.
Contact OSC Help for getting access to ANSYS if you are a commercial user.
To load the default version, use module load ansys. To select a particular software version, use module load ansys/version. For example, use module load ansys/17.2 to load version 17.2 on Cardinal. After the module is loaded, use the following command to open the Workbench GUI:
runwb2
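To run the Workbench GUI on a compute node, you can follow the same interactive pattern used for the other products on this page (a sketch; modify per your own needs):

sinteractive -N 1 -n 1 -t 1:00:00 -L ansys@osc:1 -A <account>
module load ansys
runwb2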