Handling full-node MPI warnings with MVAPICH 3.0
When running a full-node MPI job with MVAPICH 3.0, you may encounter the following warning message:
When running MPI+OpenMP hybrid code with the Intel Classic Compiler and MVAPICH 3.0, you may encounter the following warning message from hwloc:
Compilation errors with GCC 13
Users may encounter the following errors when compiling a C++ program with GCC 13:
error: 'uint64_t' in namespace 'std' does not name a type
or a similar error for other fixed-width integer types.
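The underlying cause is that GCC 13's libstdc++ no longer includes <cstdint> transitively from other standard headers, so any source file that names std::uint64_t (or related fixed-width types) must include it directly. Below is a minimal sketch of the fix; the example program itself is hypothetical:

#include <cstdint>   // GCC 13: must be included explicitly for std::uint64_t and friends
#include <iostream>

int main() {
    std::uint64_t byte_count = 0;   // fails to compile under GCC 13 without <cstdint>
    std::cout << byte_count << std::endl;
    return 0;
}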
OpenMPI applications failing with HCOLL enabled
Several applications using OpenMPI, including HDF5, Boost, Rmpi, ORCA, and CP2K, may fail with errors such as:
mca_coll_hcoll_module_enable() coll_hcol: mca_coll_hcoll_save_coll_handlers failed
or
Caught signal 11: segmentation fault
We have identified that the issue is related to HCOLL (Hierarchical Collectives) being enabled in OpenMPI.
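A common workaround, offered here as a sketch rather than a site-specific recommendation, is to disable the hcoll collective component when launching the job; ./my_app and the process count below are placeholders:

# Disable HCOLL via an environment variable before launching ...
export OMPI_MCA_coll_hcoll_enable=0
mpirun -np 48 ./my_app

# ... or pass the equivalent MCA parameter directly on the command line:
mpirun -np 48 --mca coll_hcoll_enable 0 ./my_app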
STAR-CCM+ UCX errors on InfiniBand NDR
STAR-CCM+ encounters errors when running MPI jobs with Intel MPI or OpenMPI, displaying the following message:
ib_iface.c:1139 UCX ERROR Invalid active_speed on mlx5_0:1: 128
This issue occurs because the UCX library (v1.8) bundled with STAR-CCM+ only supports Mellanox InfiniBand EDR, while Mellanox InfiniBand NDR is used on Cardinal. As a result, STAR-CCM+ fails to correctly communicate over the newer fabric.
Affected versions: 18.06.006, 19.04.009, and possibly later versions.
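Until a STAR-CCM+ build that bundles a newer UCX is available, one possible mitigation, assuming the bundled Open MPI honors standard OMPI_MCA_* environment variables, is to steer it away from UCX and fall back to TCP transports. The launch line below is illustrative only, and TCP will be noticeably slower than native InfiniBand:

# Assumed workaround: force the bundled Open MPI to use the ob1 PML with TCP instead of UCX
export OMPI_MCA_pml=ob1
export OMPI_MCA_btl=tcp,self
starccm+ -batch -np 96 simulation.sim   # illustrative launch; adjust to your usual command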