OpenMPI Application
The Message Passing Interface (MPI) is a widely adopted standard in high-performance computing (HPC) applications, enabling communication between compute nodes within a single system or across multiple platforms.
When creating an Apptainer container to build and run MPI applications, it's important to ensure that the OpenMPI version inside the container is compatible with the version on the host system. Additionally, the container's OpenMPI configuration must support the same process management mechanism and version (for example, PMI2 or PMIx) as used on the host.
This page shows how to build and run OpenMPI applications within Apptainer containers. There are three different approaches to achieve this, but the most common method for running OpenMPI applications installed in an Apptainer container is to use the OpenMPI implementation available on the host system. This approach, known as the hybrid model, leverages both the OpenMPI installation provided by the system administrators on the host and the one included in the container. This tutorial focuses on this model. For more information, please refer to the Apptainer documentation.
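In the hybrid model, the host-side MPI launcher starts one container instance per MPI rank, and each instance runs the application that was built against the OpenMPI inside the image. The following is a minimal sketch of this invocation pattern; the image name, application path, and rank count are placeholders, not part of this tutorial:
# The host's OpenMPI launches 4 container instances; each instance runs one MPI rank.
mpirun -n 4 apptainer exec my_mpi_app.sif /opt/app/my_mpi_program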
Use Case (ASPECT)
ASPECT is software designed to simulate problems in thermal convection, with a primary focus on processes occurring in the Earth's mantle. However, its design is general enough to accommodate a wide range of applications.
To ensure ASPECT functions correctly, several dependencies must be installed, including BLAS, LAPACK, zlib, and others. ASPECT also requires OpenMPI, along with its headers and the necessary executables to run MPI programs.
In this tutorial, you will learn how to create, build, and run an Apptainer container that installs ASPECT and all its dependencies, including OpenMPI.
Creating the Definition File
The first step in building ASPECT is to create a definition file that installs ASPECT along with all its dependencies, including OpenMPI. Create a definition file named aspect_apptainer.def with the following content:
# Use Docker as the bootstrap method for building the container.
Bootstrap: docker
# Use Ubuntu 22.04 as the base image.
From: ubuntu:22.04
# Define environment variables to configure paths for the ASPECT binaries.
%environment
# OpenMPI is installed from the Ubuntu packages, so its libraries and man
# pages are already on the standard system paths.
# Add ASPECT binaries to the PATH for easy execution.
export PATH="/opt/aspect/bin:$PATH"
# Define the steps to build the container, install dependencies, and configure the environment.
%post
# Update the package repository to fetch the latest package information.
apt update
echo "Installing required packages..."
# Install necessary tools, libraries, and dependencies for building and running scientific applications.
apt install -yq --install-recommends \
build-essential \
ca-certificates \
cmake \
cmake-curses-gui \
file \
g++ \
gcc \
gfortran \
git \
libblas-dev \
libboost-all-dev \
liblapack-dev \
lsb-release \
ninja-build \
numdiff \
openmpi-bin \
openmpi-common \
software-properties-common \
wget \
zlib1g-dev \
astyle
echo "Installing Open MPI"
# Install OpenMPI development libraries and tools.
apt-get install -y openmpi-bin openmpi-common libopenmpi-dev
# Verify OpenMPI installation by checking the version of the MPI compiler.
mpicc --showme:version
# Add the deal.II library repository for finite element computations.
add-apt-repository -y "ppa:ginggs/deal.ii-9.4.0-backports"
apt update
# Install deal.II runtime and development libraries.
apt install -yq libdeal.ii-9.4.0 libdeal.ii-dev
# Clean up cached package data to reduce the image size.
apt-get clean && rm -r /var/lib/apt/lists/*
echo "Installing ASPECT"
# Define the installation directory for ASPECT.
export Aspect_DIR="/opt/aspect"
# Clone the ASPECT repository with a shallow copy to minimize download size.
git clone --depth 1 --branch v2.4.0 https://github.com/geodynamics/aspect.git "$Aspect_DIR"
# Create a build directory for ASPECT.
mkdir -p "${Aspect_DIR}/build"
cd "${Aspect_DIR}/build"
# Configure ASPECT with CMake for Release mode and install examples.
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX="$Aspect_DIR" -DASPECT_INSTALL_EXAMPLES=ON ..
# Build and install ASPECT using two CPU cores.
make -j2
make install
# Clean up build artifacts to save space.
make clean
The OpenMPI version in the container must be compatible with the OpenMPI version available on the host.
Building and Using the Container
The next step is to build the container using the following command:
apptainer build aspect_apptainer.sif aspect_apptainer.def
Once the container is built successfully, you should have the SIF file aspect_apptainer.sif. You can now run ASPECT using this SIF image; before doing so, make sure a compatible OpenMPI installation is available on the host, as described in the following sections.
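As an optional sanity check, you can confirm that the image was built correctly and that the aspect executable is visible inside the container (the which command is assumed to be present in the Ubuntu base image):
# Show the metadata recorded in the SIF image.
apptainer inspect aspect_apptainer.sif
# Confirm the aspect executable is on the PATH inside the container.
apptainer exec aspect_apptainer.sif which aspect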
Installing OpenMPI on Discovery
As mentioned before, you need a compatible OpenMPI version on the host. First, find the OpenMPI version installed in the container by running the following command:
$ apptainer exec aspect_apptainer.sif mpicc --showme:version
/usr/bin/mpicc: Open MPI 4.1.2 (Language: C)
The output shows that the OpenMPI version in the container is 4.1.2.
You can use Spack to install a matching OpenMPI version on Discovery. Spack itself is installed through SStack: first load the SStack module, then install Spack. SStack creates a module file for Spack, which you need to load into your environment before installing OpenMPI:
module load sstack
sstack install -n spack_mpi -t spack
module load spack/spack_mpi
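To confirm that Spack is now available in your environment, you can print its version; the version reported will depend on what SStack installed:
# Verify that the spack command is on the PATH.
spack --version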
The following shows how to install the major OpenMPI versions. Update the command below with the OpenMPI version that matches your container. For example, to install version 4.1.2, run the following command:
spack install openmpi@4.1.2 +pmi fabrics=ucx schedulers=slurm +legacylaunchers
Once this version is installed, Spack creates a module file for it; load that module to use this OpenMPI in your environment.
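For example, you can list the modules Spack generated and load the new one; the hash suffix (gspt4wl in the submission script below) will differ on your system:
# List the OpenMPI modules generated by Spack.
module avail openmpi
# Load the freshly installed version (replace the hash with the one shown above).
module load openmpi/4.1.2-gspt4wl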
The pmi variant enables PMI support, which provides the communication interface between MPI processes and the job scheduler, and the legacylaunchers variant ensures that the traditional MPI launcher commands mpirun and mpiexec remain available.
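If you are unsure which OpenMPI versions and variants your Spack installation provides, you can inspect the package before installing; the exact output depends on the Spack version that SStack set up:
# Show the available versions, variants, and dependencies of the openmpi package.
spack info openmpi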
If version 5.0.0 or later is installed in the container, you need to install a matching version on Discovery. For example, to install version 5.0.3, run the following command:
spack install openmpi@5.0.3 fabrics=ucx schedulers=slurm
This command creates a module for this version; load it into your environment so that the host OpenMPI is compatible with the one in the container.
Running the OpenMPI Container
To run your OpenMPI application (ASPECT) using the container you built in the sections above, you need to create a submission script.
Make sure to load the Spack module and then the OpenMPI module in your environment so that a compatible version of OpenMPI is available.
#!/bin/bash
## Slurm Directives
#SBATCH --job-name=ASPECT_apptainer
#SBATCH --output=ASPECT_apptainer-%j.out
#SBATCH --ntasks=16
#SBATCH --nodes=2
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=5G
#SBATCH --partition=normal
#SBATCH --time=00:10:00
# load the required modules
module restore
module load spack/spack_mpi
module load openmpi/4.1.2-gspt4wl
# SIF image
SIF_IMAGE=/path/to/aspect_apptainer.sif
# Input file
INPUT=/opt/aspect/cookbooks/convection-box/convection-box.prm
mpirun apptainer exec $SIF_IMAGE aspect $INPUT
The standard way to execute MPI applications with hybrid Apptainer containers is to run the native mpirun command from the host, which will start Apptainer containers and ultimately MPI ranks within the containers.
convection-box.prm is an input file that defines the problem and parameters used to run ASPECT. After the job has been submitted and completed successfully, you can review the output file to examine the results.
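For example, assuming the submission script above is saved as aspect_job.sh (a name chosen here for illustration), you can submit and monitor the job as follows:
# Submit the job to Slurm.
sbatch aspect_job.sh
# Check the status of your jobs in the queue.
squeue -u $USER
# After the job finishes, inspect the output file (the job ID replaces %j in the name).
cat ASPECT_apptainer-<jobid>.out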
If you check the output file, you will find that it contains the following lines, which indicate that the job ran successfully using 16 MPI processes, as requested in the submission script.
...
...
-----------------------------------------------------------------------------
-- This is ASPECT, the Advanced Solver for Problems in Earth's ConvecTion.
-- . version 2.4.0
-- . using deal.II 9.4.0
-- . with 32 bit indices and vectorization level 0 (64 bits)
-- . using Trilinos 13.2.0
-- . using p4est 2.2.0
-- . running in OPTIMIZED mode
-- . running with 16 MPI processes
-----------------------------------------------------------------------------
...
...