
We are preparing our new cluster, Cardinal, and updating our software pages as we progress. Some software has already been installed on Cardinal, but it is restricted to users with cluster access. Even if you see software listed under Cardinal, you will not be able to use it until we open the cluster to the public.

MVAPICH

MVAPICH is a standard library for performing parallel processing using a distributed-memory model. 

Availability and Restrictions

Versions

The following versions of MVAPICH are available on OSC systems:

Version      Cardinal  Pitzer  Ascend
3.0*         X                 X
* Current default version

You can use module spider mvapich/ to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Access

MVAPICH is available to all OSC users. If you have any questions, please contact OSC Help.

Publisher/Vendor/Repository and License Type

Network-Based Computing Laboratory (NBCL), The Ohio State University / Open source

Usage

Set-up

To set up your environment for using the MPI libraries, you must load the appropriate module:

module load mvapich

You will get the default version for the compiler you have loaded.

Note: If you are using the GNU compilers, be sure to swap the Intel compiler module for the GNU module.
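For instance, on a login node where the Intel module is loaded by default, the swap might look like the following sketch (exact module names may differ on your cluster; check module avail):

```shell
# Replace the default Intel compiler module with GNU,
# then load the MVAPICH build that matches that compiler.
module swap intel gnu
module load mvapich

# Verify that the intended compiler/MPI pair is loaded.
module list
```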

Building With MPI

To build a program that uses MPI, you should use the compiler wrappers provided on the system. They accept the same options as the underlying compiler. The commands are shown in the following table.

Compiler    Command
C           mpicc
C++         mpicxx
FORTRAN 77  mpif77
Fortran 90  mpif90

For example, to build the code my_prog.c using the -O2 option, you would use:

mpicc -o my_prog -O2 my_prog.c

In rare cases you may be unable to use the wrappers. In that case you should use the environment variables set by the module.

Variable        Use
$MPI_CFLAGS     Use during your compilation step for C programs.
$MPI_CXXFLAGS   Use during your compilation step for C++ programs.
$MPI_FFLAGS     Use during your compilation step for Fortran 77 programs.
$MPI_F90FLAGS   Use during your compilation step for Fortran 90 programs.
$MPI_LIBS       Use when linking your program to the MPI libraries.

For example, to build the code my_prog.c without using the wrappers, you would use:

mpicc -c $MPI_CFLAGS my_prog.c

mpicc -o my_prog my_prog.o $MPI_LIBS

Batch Usage

Programs built with MPI can only be run in the batch environment at OSC. For information on starting MPI programs using the srun command, see Batch Processing at OSC.

Be sure to load the same compiler and mvapich modules at execution time as at build time.
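As an illustrative sketch of a batch job (the job name, node counts, time limit, and compiler module are placeholders; adjust them for your account and build), a Slurm script for an MPI program might look like:

```shell
#!/bin/bash
#SBATCH --job-name=my_prog
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --time=00:10:00

# Load the same compiler and MVAPICH modules that were
# loaded when the program was built.
module load intel
module load mvapich

# srun launches one MPI rank per task allocated above
# (here, 2 nodes x 4 tasks = 8 ranks).
srun ./my_prog
```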

Known Issues

hwloc warning: Failed with: intersection without inclusion

Updated: August 2024
Versions Affected: mvapich/3.0 and above
When running MPI+OpenMP hybrid code with the Intel Classic Compiler and MVAPICH 3.0, you may encounter the following warning message from hwloc:
****************************************************************************
* hwloc 2.7.1rc1-git received invalid information from the operating system.
*
* Failed with: intersection without inclusion
* while inserting Group0 (cpuset 0x00000001,0x01010101,0x01010101,0x01010101) at Package (P#0 cpuset 0x55555555,0x55555555,0x55555555)
* coming from: linux:sysfs:numa
*
* The following FAQ entry in the hwloc documentation may help:
*   What should I do when hwloc reports "operating system" warnings?
* Otherwise please report this error message to the hwloc user's mailing list,
* along with the files generated by the hwloc-gather-topology script.
*
* hwloc will now ignore this invalid topology information and continue.
****************************************************************************

This warning is caused by a version mismatch: MVAPICH 3.0 bundles hwloc 2.4.1, which does not match the hwloc 2.7.1rc1-git used by the Intel OpenMP runtime (iomp5). We have tested a few applications and observed no performance issues; however, your case may still be affected by potential process- or thread-placement (locality) issues. If you encounter any problems, please consider using MVAPICH 3.0 with other compilers such as oneAPI or GCC.
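If your application behaves correctly and only the message itself is a nuisance, hwloc's HWLOC_HIDE_ERRORS environment variable can suppress such reports. Note that this only hides the warning; it does not resolve the underlying version mismatch, and you should still confirm placement is sane for your job:

```shell
# Hide hwloc "invalid topology" warnings for this job only.
# This suppresses the message; it does not change how the
# topology is actually handled.
export HWLOC_HIDE_ERRORS=1
srun ./my_prog
```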
