FLUENT

ANSYS FLUENT (called FLUENT hereafter) is a state-of-the-art computer program for modeling fluid flow and heat transfer in complex geometries.

Availability and Restrictions

FLUENT is available on the Cardinal Cluster. You can see the currently available versions in the table on the main ANSYS page.

You can use module spider ansys on Cardinal to view the available modules for that machine. Feel free to contact OSC Help if you need other versions for your work.

Access for Academic Users

Use of ANSYS products for academic purposes requires validation. In order to obtain validation, please contact OSC Help for further instruction.

Currently, there are a total of 50 ANSYS base license tokens and 900 HPC tokens for academic users. These base and HPC tokens are shared among all ANSYS products we have at OSC. A base license token allows FLUENT to use up to 4 cores without any additional tokens. If you want to use more than 4 cores, you will need an additional "HPC" token for each core beyond the first 4. For instance, a serial FLUENT job with 1 core needs 1 base license token, while a parallel FLUENT job with 28 cores needs 1 base license token and 24 HPC tokens.
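For example, the 28-core case above corresponds to the following license request in a Slurm batch script (a sketch; adjust the core and token counts to match your own job):

# One base token covers the first 4 cores; the remaining 28 - 4 = 24 cores each need an HPC token
#SBATCH --nodes=1 --ntasks-per-node=28
#SBATCH -L ansys@osc:1,ansyspar@osc:24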

Access for Commercial Users

Contact OSC Help to get access to FLUENT if you are a commercial user.

Usage

Usage on Cardinal

Set-up on Cardinal

To load the default version of the FLUENT module, use module load ansys. To select a particular software version, use module load ansys/version. For example, use module load ansys/17.2 to load FLUENT version 17.2 on Cardinal.
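For example, a typical sequence on a Cardinal login node might look like the following (standard Lmod module commands; check the output of module spider ansys for the versions actually installed):

module spider ansys     # list available ANSYS/FLUENT versions
module load ansys       # load the default version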

Batch Usage on Cardinal

When you log into cardinal.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your FLUENT analysis to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Batch Limit Rules for more info. Batch jobs run on the compute nodes of the system and not on the login node, which is desirable for large problems since more resources can be used.

Interactive Batch Session

Interactive mode is similar to running FLUENT on a desktop machine in that the graphical user interface is sent from OSC and displayed on the local machine. Interactive jobs are run on compute nodes of the cluster, with X11 forwarding turned on. The intention is that users can run FLUENT interactively to build their model and prepare their input file. Once developed, this input file can then be run in non-interactive batch mode.

To run the FLUENT GUI interactively, a batch job needs to be submitted from the login node, with X11 forwarding, to request the necessary compute resources. Follow the steps below to use the FLUENT GUI interactively (a connection example is shown after the steps):

  1. Ensure that your SSH client software has X11 forwarding enabled
  2. Connect to Cardinal system
  3. Request an interactive job. The command below will request one whole node with 28 cores (-N 1 -n 28), for a walltime of one hour (-t 1:00:00), with one FLUENT base license token and 24 HPC tokens (modify as per your own needs):
    sinteractive -N 1 -n 28 -t 1:00:00 -L ansys@osc:1,ansyspar@osc:24
  4. Once the interactive job has started, run the following commands to setup and start the FLUENT GUI:

    module load ansys
    fluent 
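As an illustration of steps 1 and 2, connecting from a Linux or macOS terminal with X11 forwarding enabled might look like the following (replace username with your OSC username; Windows users also need a local X server):

    ssh -X username@cardinal.osc.edu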
    
Non-interactive Batch Job (Serial Run Using 1 Base Token)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice.

Below is an example batch script (job.txt) for a serial run with an input file (run.input) on Cardinal:

#!/bin/bash
#SBATCH --job-name=serial_fluent
#SBATCH --time=1:00:00
#SBATCH --nodes=1 --ntasks-per-node=1
#SBATCH -L ansys@osc:1
#
# The following lines set up the FLUENT environment
#
module load ansys
#
# Copy files to $TMPDIR and move there to execute the program
#
cp test_input_file.cas test_input_file.dat run.input $TMPDIR
cd $TMPDIR
#
# Run fluent
fluent 3d -g < run.input  
#
# Where the file 'run.input' contains the commands you would normally
# type in at the Fluent command prompt.
# Finally, copy files back to your home directory
cp *   $SLURM_SUBMIT_DIR 

As an example, your run.input file might contain:

file/read-case-data test_input_file.cas 
solve/iterate 100
file/write-case-data test_result.cas
file/confirm-overwrite yes    
exit  
yes  

In order to run it via the batch system, submit the job.txt file with the command: sbatch job.txt 
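For example (sbatch and squeue are standard Slurm commands; the job ID printed by sbatch will differ for each submission):

sbatch job.txt
squeue -u $USER     # check the status of your jobs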

Non-interactive Batch Job (Parallel Execution Using HPC Tokens)

FLUENT can be run in parallel, but it is very important that you read the documentation in the FLUENT Manual on the details of how this works.

In addition to requesting the FLUENT base license token (-L ansys@osc:1), you need to request copies of the ansyspar license, i.e., HPC tokens (-L ansys@osc:1,ansyspar@osc:[n]), where [n] is equal to the number of cores you requested minus 4.
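For instance, the example script below requests 2 nodes with 28 cores each (56 cores total), so [n] = 56 - 4 = 52 HPC tokens.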

Parallel jobs have to be submitted to Cardinal via the batch system. An example of the batch script follows:

#!/bin/bash
#SBATCH --job-name=parallel_fluent
#SBATCH --time=3:00:00
#SBATCH --nodes=2 --ntasks-per-node=28
#SBATCH -L ansys@osc:1,ansyspar@osc:52
set -x   # echo commands as they are executed
hostname   
#   
# The following lines set up the FLUENT environment   
#   
module load ansys
#      
# Create the config file for socket communication library   
#   
# Create list of nodes to launch job on   
rm -f pnodes   
srun hostname | sort > pnodes   # one hostname line per allocated task
export ncpus=`cat pnodes | wc -l`   
#   
#   Run fluent   
fluent 3d -t$ncpus -pinfiniband.ofed -cnf=pnodes -g < run.input 

Known Issues

Parallel jobs hang or fail at startup

Resolution: Resolved with workaround
Update: April 2024
Version: All

FLUENT parallel jobs using the default MPI (Intel MPI) may fail at startup and hang because of a recent Slurm upgrade. Intel MPI in FLUENT uses SSH as the default bootstrap mechanism to launch the Hydra process manager. Starting with Slurm version 23.11, the environment variable I_MPI_HYDRA_BOOTSTRAP_EXEC_EXTRA_ARGS=--external-launcher is added because Slurm is set as the default bootstrap system (I_MPI_HYDRA_BOOTSTRAP=slurm). However, this argument causes an issue when SSH is used as the bootstrap mechanism.

Workaround

Add export -n I_MPI_HYDRA_BOOTSTRAP_EXEC_EXTRA_ARGS before the fluent command in your batch script; this removes the variable from the environment passed to FLUENT.
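For example, applied to the parallel batch script above, the last lines would become:

export -n I_MPI_HYDRA_BOOTSTRAP_EXEC_EXTRA_ARGS
fluent 3d -t$ncpus -pinfiniband.ofed -cnf=pnodes -g < run.input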
