MPI Library

MPI is a standard library for performing parallel processing using a distributed-memory model. The Glenn and Oakley clusters at OSC use the MVAPICH implementation of the Message Passing Interface (MPI), which is based on MPICH and optimized for the high-speed InfiniBand interconnects.

MVAPICH2, based on the MPI-2 standard, is installed on both Oakley and Glenn. The older MVAPICH, based on the MPI-1 standard, is installed on Glenn only.

Availability & Restrictions

MPI is available without restriction to all OSC users.

Installations are available for the Intel, PGI, and GNU compilers.

The following versions of MVAPICH2 are available on OSC systems:

Version  Glenn  Oakley
1.5      X*
1.6      X
1.7             X*
1.8             X
1.9             X

The following versions of MVAPICH (MPI-1) are available on OSC systems:

Version  Glenn  Oakley
1.1      X*

Some older versions are also available.

*Default version. The default on Oakley is the build corresponding to the currently loaded compiler. The default on Glenn is the MPI-1 build for the PGI compiler.

Usage

Set-up

To set up your environment for using the MPI libraries, you must load the appropriate module. On Oakley this is pretty straightforward:

module load mvapich2

You will get the default version for the compiler you currently have loaded. (If you are using the GNU compilers, be sure to swap the intel compiler module for the gnu module first.)
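For example, to switch from the default Intel build to the GNU build on Oakley, the sequence might look like the following sketch (module names follow the text above; confirm the exact names on your system with `module avail`):

```shell
# Swap the Intel compiler module for the GNU compiler module,
# then load the MVAPICH2 build that matches the loaded compiler.
module swap intel gnu
module load mvapich2
```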

On Glenn you should load the module shown in the table below for the compiler you're using. Be sure to unload the default module before loading a new one.

        PGI   Intel                    GNU
MPI-1   mpi   mvapich-1.1-fixes-intel  mvapich-1.1-fixes-gnu
MPI-2   mpi2  mvapich2-1.5-intel       mvapich2-1.5-gnu

To see all available mvapich modules on Glenn, use this command: module avail mvapich

For example, to use the MPI-2 library with the gnu compilers:

module unload mpi
module load mvapich2-1.5-gnu

Building With MPI

To build a program that uses MPI, you should use the compiler wrappers provided on the system. They accept the same options as the underlying compiler. The commands are shown in the following table.

Language    Wrapper
C           mpicc
C++         mpicxx
FORTRAN 77  mpif77
Fortran 90  mpif90

For example, to build the code my_prog.c using the -O2 option, you would use:

mpicc -o my_prog -O2 my_prog.c

In rare cases you may be unable to use the wrappers. In that case you should use the environment variables set by the module.

Variable        Use
$MPI_CFLAGS     Use during your compilation step for C programs.
$MPI_CXXFLAGS   Use during your compilation step for C++ programs.
$MPI_FFLAGS     Use during your compilation step for Fortran 77 programs.
$MPI_F90FLAGS   Use during your compilation step for Fortran 90 programs.
$MPI_LIBS       Use when linking your program to the MPI libraries.

For example, to build the code my_prog.c without using the wrappers you would use:

mpicc -c $MPI_CFLAGS my_prog.c
mpicc -o my_prog my_prog.o $MPI_LIBS

Batch Usage

Programs built with MPI can only be run in the batch environment at OSC. For information on starting MPI programs using the mpiexec command, see Batch Processing at OSC.

Be sure to load the same compiler and mvapich modules at execution time as at build time.
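A minimal batch script for Oakley might look like the following sketch. The PBS directives and the `mpiexec` launch follow the batch conventions described in Batch Processing at OSC, but the job name, node count, walltime, module choices, and the executable name `my_prog` are placeholders to adapt to your own job:

```shell
#PBS -N my_mpi_job
#PBS -l nodes=2:ppn=12        # two Oakley nodes, 12 cores per node
#PBS -l walltime=0:10:00
#PBS -j oe

# Load the same compiler and MVAPICH2 modules used at build time.
module load intel
module load mvapich2

# Run from the directory the job was submitted from.
cd $PBS_O_WORKDIR

# mpiexec starts one MPI process per allocated core by default.
mpiexec ./my_prog
```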
