MVAPICH2 is an implementation of the Message Passing Interface (MPI) standard for parallel processing using a distributed-memory model.
Availability and Restrictions
Versions
The following versions of MVAPICH2 are available on OSC systems:
Version | Owens | Pitzer | Ascend | Cardinal |
---|---|---|---|---|
2.3 | X | X | | |
2.3.1 | X | X | | |
2.3.2 | X | X | | |
2.3.3 | X* | X* | | |
2.3.5 | X | X | | |
2.3.6 | X | X | X | |
2.3.7 | | | X* | |
2.3.7-1 | | | | X |

* Current default version
You can use `module spider mvapich2` to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.
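For example, to see how to load a specific version from the table above (2.3.6 is used here only as an illustration), ask module spider for that version; it lists any prerequisite modules, such as the compiler the build expects:

module spider mvapich2/2.3.6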
Access
MVAPICH2 is available to all OSC users. If you have any questions, please contact OSC Help.
Publisher/Vendor/Repository and License Type
NBCL (Network-Based Computing Laboratory), The Ohio State University / Open source
Usage
Set-up
To set up your environment for using the MPI libraries, you must load the appropriate module:
module load mvapich2
You will get the default version for the compiler you have loaded.
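As a minimal sketch (the compiler module names below are illustrative; check `module avail` on your cluster for what is actually installed), load the compiler first and then MVAPICH2:

```
# Load a compiler first, then MVAPICH2; the MVAPICH2 build that matches
# the loaded compiler is selected automatically.
module load intel       # or: module load gnu
module load mvapich2    # default MVAPICH2 version for the loaded compiler
```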
Building With MPI
To build a program that uses MPI, you should use the compiler wrappers provided on the system. They accept the same options as the underlying compiler. The commands are shown in the following table.
Language | Command |
---|---|
C | mpicc |
C++ | mpicxx |
FORTRAN 77 | mpif77 |
Fortran 90 | mpif90 |
For example, to build the code my_prog.c using the -O2 option, you would use:
mpicc -o my_prog -O2 my_prog.c
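If you want to see what a wrapper actually invokes, the MPICH-derived wrappers shipped with MVAPICH2 typically accept a -show option that prints the underlying compiler command without running it (shown here as a sketch; verify with mpicc -help on your system):

mpicc -show -o my_prog -O2 my_prog.c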
In rare cases you may be unable to use the wrappers. In that case you should use the environment variables set by the module.
Variable | Use |
---|---|
$MPI_CFLAGS | Use during your compilation step for C programs. |
$MPI_CXXFLAGS | Use during your compilation step for C++ programs. |
$MPI_FFLAGS | Use during your compilation step for Fortran 77 programs. |
$MPI_F90FLAGS | Use during your compilation step for Fortran 90 programs. |
$MPI_LIBS | Use when linking your program to the MPI libraries. |
For example, to build the code my_prog.c without the wrappers, you would pass these variables to the underlying compiler (gcc is shown here as an illustration; use the compiler provided by your loaded compiler module):

gcc -c $MPI_CFLAGS my_prog.c
gcc -o my_prog my_prog.o $MPI_LIBS
Batch Usage
Programs built with MPI can only be run in the batch environment at OSC. For information on starting MPI programs using the `srun` or `mpiexec` command, see Batch Processing at OSC.
Be sure to load the same compiler and MVAPICH2 modules at execution time as at build time.
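As a rough sketch (the account, node, core, and walltime values below are placeholders; 28 tasks per node simply matches a standard Owens compute node), a Slurm job script for an MVAPICH2 program might look like:

```
#!/bin/bash
#SBATCH --job-name=mpi_test
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=28     # adjust to the core count of your cluster's nodes
#SBATCH --time=00:10:00
#SBATCH --account=PAS1234        # placeholder project code; use your own

# Load the same compiler and MVAPICH2 modules that were loaded at build time
module load intel
module load mvapich2

# Launch one MPI rank per task allocated by Slurm
srun ./my_prog
```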
Known Issues
Large MPI job startup failure
Versions affected: mvapich2/2.3 and mvapich2/2.3.1
Further Reading
- The Message Passing Interface (MPI) Standard
- MPI Training Course