DOGWOOD MPI MODULES

Why MPI modules?

To run MPI code on the Dogwood cluster you will need to load one of the MPI modules. Once you have loaded a module, you can run module save to save your current module set so you won't have to repeat this step in future sessions. This document covers the various options available.
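For example, to load one of the modules described later in this document and save it as your default set (a minimal sketch; any of the available MPI modules could be substituted):

    module load openmpi_3.0.0/intel_17.2
    module save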

MPI Variations

There are two implementations of MPI used on the cluster, both of which run over the InfiniBand fabric: OpenMPI and MVAPICH2. In addition, there are three different compilers available on the cluster: Intel, GNU, and PGI. Thus there are many possible combinations of MPI implementation and compiler, and there are multiple versions of each. Note that we do not have modules built for every possible combination. To see what is available, you can use the module avail command to list all modules, or you can supply a string to have it search for that string. For example, module avail openmpi will return all modules that contain openmpi in their name.
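For instance, to list everything and then narrow the search to each of the two MPI implementations (the search strings are simply the module name fragments used in this document):

    module avail
    module avail openmpi
    module avail mvapich2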

Performance

Overall, the various MPI implementations will deliver similar performance, although performance will not be identical for every operation and message size. You may therefore see better performance with a particular module for your application and problem size.
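One way to compare implementations is to swap one MPI module for another and rerun your application. A minimal sketch, assuming both modules listed below are installed and remembering that your code must be recompiled against the new MPI before rerunning:

    module swap openmpi_3.0.0/intel_17.2 mvapich2_2.3a/intel_17.2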

Naming Convention

The naming convention is <MPI>_<version>/<Compiler>_<version>. So, for example, the module openmpi_3.0.0/intel_17.2 is OpenMPI 3.0.0 built with the Intel compilers version 17.2.
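A typical workflow with that module is to load it, compile with the MPI wrapper compiler, and launch the resulting binary. This is a sketch only: hello_mpi.c is a hypothetical source file, and on a shared cluster you would normally launch MPI jobs through the batch scheduler rather than directly on a login node.

    module load openmpi_3.0.0/intel_17.2
    mpicc -o hello_mpi hello_mpi.c
    mpirun -np 4 ./hello_mpi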

Caveat – Deprecated modules:

This naming convention replaces the previous naming system. The old modules have not gone away yet, but they are deprecated. We created copies of them using the new naming system, and users are encouraged to start using the new names (see the example after the table below).

Old Module Name        New Module Name
openmpi_gcc/4.8.5      openmpi_2.0.3/gcc_4.8.5
openmpi_gcc/6.3.0      openmpi_2.0.3/gcc_6.3.0
openmpi_intel/17.2     openmpi_2.0.3/intel_17.2
mvapich2_gcc/4.8.5     mvapich2_2.3a/gcc_4.8.5
mvapich2_intel/17.2    mvapich2_2.3a/intel_17.2
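To migrate, unload a deprecated module, load its replacement from the table above, and save the new set. A minimal sketch using one of the pairs listed:

    module unload openmpi_intel/17.2
    module load openmpi_2.0.3/intel_17.2
    module save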


Last Update 3/29/2024 1:23:42 AM