Intel MPI is Intel's implementation of the Message Passing Interface (MPI) library. See Intel Compilers for the compiler versions available at OSC.
Intel MPI may be used as an alternative to, but not in conjunction with, the MVAPICH2 MPI libraries. The versions currently available at OSC are:
Version | Pitzer | Ascend | Cardinal
---|---|---|---
2017.4 | X | |
2018.3 | X | |
2018.4 | X | |
2019.3 | X | |
2019.7 | X* | |
2021.3 | X | |
2021.4.0 | X* | |
2021.5 | X | |
2021.10.0 | X | |
2021.10 | X | X |
2021.11 | X | X |
You can use `module spider intelmpi` to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.
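For example, the following commands list the Intel MPI modules on the current cluster and show what must be loaded before a specific version (the version number below is illustrative only):

```
# List all Intel MPI modules available on this cluster
module spider intelmpi

# Show the prerequisites for one specific version (version number is an example)
module spider intelmpi/2021.10
```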
Intel MPI is available to all OSC users. If you have any questions, please contact OSC Help.
Publisher/Vendor: Intel. License type: Commercial.
To configure your environment for the default version of Intel MPI, load the module with `module load intelmpi`. Software compiled against this module will use the libraries at runtime.
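As a sketch, a typical set-up session might look like the following (the module names are examples; check `module spider` output for the exact names and versions on your cluster):

```
# Load a compiler module, then Intel MPI (examples only)
module load intel
module load intelmpi

# Confirm that the MPI compiler wrappers are now on your PATH
which mpicc mpiifort
```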
We have defined several environment variables to make it easier to build and link with the Intel MPI libraries.
VARIABLE | USE |
---|---|
$MPI_CFLAGS |
Use during your compilation step for C programs. |
$MPI_CXXFLAGS |
Use during your compilation step for C++ programs. |
$MPI_FFLAGS |
Use during your compilation step for Fortran programs. |
$MPI_F90FLAGS |
Use during your compilation step for Fortran 90 programs. |
$MPI_LIBS |
Use when linking your program to Intel MPI. |
In general, for any application already set up to use `mpicc`, compilation should be fairly straightforward.
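For instance, a C program could be built either with the `mpicc` wrapper or with the Intel C compiler plus the convenience variables above (a minimal sketch; `hello.c` is a placeholder source file, and `icc` stands for whichever Intel C compiler driver your loaded compiler module provides):

```
# Build with the MPI compiler wrapper (hello.c is a placeholder source file)
mpicc -O2 -o hello hello.c

# Equivalent two-step build using the convenience variables with the Intel C compiler
icc $MPI_CFLAGS -c hello.c
icc hello.o $MPI_LIBS -o hello
```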
When you log into pitzer.osc.edu you are actually logged into a Linux machine referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.
Below is an example batch script that runs a program built with Intel MPI (my-impi-application) for five hours on Pitzer:
```
#!/bin/bash
#SBATCH --job-name MyIntelMPIJob
#SBATCH --nodes=2 --ntasks-per-node=48
#SBATCH --time=5:00:00
#SBATCH --account=<project-account>

module load intelmpi
srun my-impi-application
```
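Assuming the script above is saved as `job.txt` (the file name is arbitrary), it can be submitted and monitored with:

```
sbatch job.txt      # submit the script to the batch system
squeue -u $USER     # check the status of your queued and running jobs
```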
Use `module spider intelmpi` to check what module(s) to load first. Use `module load [module name and version]` to load the modules you need, then use `module load intelmpi` to load the default Intel MPI. Software compiled against this module will use the libraries at runtime.
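Putting those steps together, a set-up session on Ascend might look like the following sketch (the module names and versions are placeholders; use the names reported by `module spider`):

```
# Discover which modules must be loaded before a given Intel MPI version
module spider intelmpi/2021.10.0      # example version

# Load the prerequisite(s) reported above, then Intel MPI itself
module load intel/2021.10.0           # placeholder prerequisite; use the name module spider reports
module load intelmpi/2021.10.0
```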
We have defined several environment variables to make it easier to build and link with the Intel MPI libraries.
VARIABLE | USE
---|---
`$MPI_CFLAGS` | Use during your compilation step for C programs.
`$MPI_CXXFLAGS` | Use during your compilation step for C++ programs.
`$MPI_FFLAGS` | Use during your compilation step for Fortran programs.
`$MPI_F90FLAGS` | Use during your compilation step for Fortran 90 programs.
`$MPI_LIBS` | Use when linking your program to Intel MPI.
In general, for any application already set up to use `mpicc`, compilation should be fairly straightforward.
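As another sketch, a Fortran 90 source file (`mysolver.f90`, a placeholder name) could be compiled and linked against Intel MPI either with the wrapper or with the convenience variables:

```
# Build with the Intel Fortran MPI wrapper (mysolver.f90 is a placeholder source file)
mpiifort -O2 -o mysolver mysolver.f90

# Equivalent two-step build using the convenience variables
ifort $MPI_F90FLAGS -c mysolver.f90
ifort mysolver.o $MPI_LIBS -o mysolver
```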
When you log into ascend.osc.edu you are actually logged into a Linux machine referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.
Below is an example batch script that runs a program built with Intel MPI (my-impi-application) for five hours on Ascend:
```
#!/bin/bash
#SBATCH --job-name MyIntelMPIJob
#SBATCH --nodes=2 --ntasks-per-node=48
#SBATCH --time=5:00:00
#SBATCH --account=<project-account>

module load intelmpi
srun my-impi-application
```
A partial-node MPI job may fail to start using `mpiexec` from intelmpi/2019.3 and intelmpi/2019.7, with error messages like:

```
[mpiexec@o0439.ten.osc.edu] wait_proxies_to_terminate (../../../../../src/pm/i_hydra/mpiexec/intel/i_mpiexec.c:532): downstream from host o0439 was killed by signal 11 (Segmentation fault)
[mpiexec@o0439.ten.osc.edu] main (../../../../../src/pm/i_hydra/mpiexec/mpiexec.c:2114): assert (exitcodes != NULL) failed
```

```
/var/spool/torque/mom_priv/jobs/11510761.pitzer-batch.ten.osc.edu.SC: line 30: 11728 Segmentation fault
```

```
/var/spool/slurmd/job00884/slurm_script: line 24: 3180 Segmentation fault (core dumped)
```
If you are using Slurm, make sure the job has CPU resource allocation using

```
#SBATCH --ntasks=N
```

instead of

```
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=N
```
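For example, a complete partial-node script requesting four MPI tasks might look like this (a sketch only; adjust the task count, walltime, and account for your job):

```
#!/bin/bash
#SBATCH --job-name=PartialNodeMPI
#SBATCH --ntasks=4                # four MPI tasks, no full-node request
#SBATCH --time=1:00:00
#SBATCH --account=<project-account>

module load intelmpi
srun my-impi-application
```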
If you are using PBS, please use Intel MPI 2018 or intelmpi/2019.3 with the module libfabric/1.8.1.
Intel MPI on the Slurm batch system is configured to support the PMI process manager. It is recommended to use `srun` as the MPI program launcher. If you prefer launching with `mpiexec`/`mpirun` through the Hydra process manager under Slurm, please add the following code to the batch script before running any MPI executable:
```
unset I_MPI_PMI_LIBRARY I_MPI_HYDRA_BOOTSTRAP
export I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=0   # the option -ppn only works if you set this before
```
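In context, a batch script that launches with `mpiexec` under Slurm might therefore look like this sketch (node counts and process counts are illustrative):

```
#!/bin/bash
#SBATCH --job-name=HydraLaunch
#SBATCH --nodes=2 --ntasks-per-node=4
#SBATCH --time=1:00:00
#SBATCH --account=<project-account>

module load intelmpi

# Bypass Slurm's PMI so that mpiexec uses the Hydra process manager
unset I_MPI_PMI_LIBRARY I_MPI_HYDRA_BOOTSTRAP
export I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=0

# -ppn sets processes per node; it is honored because of the export above
mpiexec -ppn 4 ./my-impi-application
```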
MPI-IO operations with intelmpi/2019.3 may crash, fail, or proceed with errors on the home directory. We do not expect the same issue on our GPFS file systems, such as the project space and the scratch space. The problem might be related to the known issue reported by the HDF5 group. Please read the section "Problem Reading A Collectively Written Dataset in Parallel" from HDF5 Known Issues for more detail.