OpenFOAM Recipe

OpenFOAM is an extensive, open-source computational fluid dynamics (CFD) toolbox widely used in both academia and industry for the simulation of fluid flow, heat transfer, and associated physical processes. It can be installed on various operating systems, including Linux, using Spack. This tutorial shows you how to install OpenFOAM on Discovery through Spack environments.

Installing Spack

The first step in installing OpenFOAM is installing Spack in your home or project directory. For instructions on how to install Spack through SStack, refer to the Spack tutorial.

You shouldn’t use the login node to install packages on Discovery. Instead, use one of the compute nodes, either through an interactive shell or an interactive job. Also, make sure you request at least 16 cores and 32 GB of memory to install OpenFOAM.
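
For example, you might request an interactive session on a compute node with something like the following (the partition name and time limit are placeholders; adjust them to your allocation):

# Request an interactive shell with 16 cores and 32 GB of memory (the minimum suggested above)
srun --partition=normal --cpus-per-task=16 --mem=32G --time=02:00:00 --pty bash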

The first step to install Spack through SStack is loading the SStack module.

module load sstack

Then, install Spack either in your home directory or in a project space. To install it in your home directory, run:

sstack install -t spack -n spack-openfoam

This will create a module that you can load in your environment to be able to use Spack.

module load spack/spack-openfoam

Create a Spack Environment

In Spack, environments are used to manage groups of packages, including their installation and dependencies. An environment can be defined using a spack.yaml file, which specifies the configuration, packages, and their versions, among other settings. This YAML-based approach provides a declarative way to specify what you want your environment to look like, making it easier to share and reproduce environments across different systems. Also, Spack environments isolate package installations from each other, enabling the creation of distinct development environments for different projects or versions of software. This isolation prevents conflicts between package versions and dependencies. For more details, refer to Spack environments.

There are different ways you can create a Spack environment. For OpenFOAM, use a Spack configuration file (spack.yaml).
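
For reference, a named environment can also be created and activated directly from the command line, although this tutorial uses the spack.yaml approach so the configuration is easy to share and reproduce (the environment name below is just an example):

# Create and activate a named environment managed by Spack (alternative to a local spack.yaml)
spack env create openfoam-env
spack env activate openfoam-env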

After loading the Spack module you installed previously, create a Spack environment recipe in your desired path.

vim spack.yaml

Copy the following content into the recipe:

# This is a Spack Environment file.
#
# It describes a set of packages to be installed, along with
# configuration settings.
spack:
  # 'specs' lists the packages and versions that Spack should install.
  specs:
    - openfoam@2306  # Requests Spack to install OpenFOAM version 2306

  # 'view' specifies whether to create a filesystem view. A 'true' value
  # means Spack will create a directory hierarchy containing symlinks to
  # the installed packages, making it easier to access them.
  view: true

  # 'concretizer' settings control how Spack resolves package dependencies.
  # 'unify: true' tells Spack to try and use the same version of dependencies
  # across different packages when possible, minimizing the number of duplicate
  # installations.
  concretizer:
    unify: true

  # 'packages' allows for detailed configuration of specific packages.
  packages:
    # For any package that depends on 'mpi', require that 'OpenMPI' version 3.1.6 is used.
    mpi:
      require: [openmpi@3.1.6+legacylaunchers] # Do not remove mpirun/mpiexec when building with slurm

    # Specific configuration for 'openmpi'.
    openmpi:
      # When installing 'openmpi', include support for the 'pmi' library,
      # use 'ucx' for communication fabrics, and enable Slurm scheduler support.
      require: +pmi fabrics=ucx schedulers=slurm

    # Configuration for 'slurm'.
    slurm:
      # Specify the version of Slurm to use.
      version: [22.05.11]
      # 'buildable: false' indicates that Spack should not try to build Slurm
      # from source. This is common for packages that are usually installed
      # and managed at the system level.
      buildable: false
      # 'externals' allows specifying an external installation of Slurm to use.
      # This is particularly useful when Slurm is installed by system administrators
      # and is not managed by Spack.
      externals:
        - spec: slurm@22.05.11 %gcc@8.5.0 arch=linux-rhel8-x86_64_v3
          # The 'prefix' specifies the root directory of the external installation
          # of Slurm.
          prefix: /usr

OpenFOAM doesn’t work properly when built with OpenMPI version 4.0.0 or later; for example, it fails to run in parallel across more than one node. It’s recommended to build OpenFOAM with OpenMPI version 3.1.6.

Then, activate the created environment by pointing Spack at the directory that contains the spack.yaml file.

spack env activate -d /path/to/environment/directory
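
Optionally, you can concretize the environment first to preview the resolved dependency tree and confirm that OpenMPI 3.1.6 was selected as the MPI provider:

spack concretize -f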

Once the environment is activated successfully, you can install OpenFOAM by simply running:

spack install

This installs the entire environment at once.
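
After the build finishes, you can list the packages installed into the active environment to confirm that OpenFOAM and OpenMPI are present:

spack find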

Once OpenFOAM is installed in the Spack environment, you only need to activate that environment in future sessions to use it. Before activating it, make sure that the Spack module used to create the environment is loaded.
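
For example, a later interactive session or job could set up the environment like this (the environment directory is a placeholder):

# Load the Spack module created earlier, then activate the OpenFOAM environment
module load spack/spack-openfoam
spack env activate -d /path/to/environment/directory

# The OpenFOAM solvers should now be on your PATH
which icoFoam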

Running OpenFOAM in Parallel

OpenFOAM utilizes domain decomposition for parallel computing, where the geometry and related fields are segmented and distributed across multiple processors for solving. This parallel computation process entails three main steps: dividing the mesh and fields, executing the application concurrently, and post-processing the segmented case, as detailed in subsequent sections. For parallel execution, OpenFOAM employs the open-source OpenMPI, an implementation of the standard Message Passing Interface (MPI).

For this documentation, you can copy the cavity tutorial from the /path/to/openFOAM/tutorials/incompressible/icoFoam directory to a directory of your choice. The cavity folder contains another cavity folder, which in turn contains the system folder. The system folder holds the configuration files used to run applications in parallel.
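
For example, assuming the placeholder installation path used above, the tutorial case could be copied like this:

# Copy the cavity tutorial case into a working directory of your choice
cp -r /path/to/openFOAM/tutorials/incompressible/icoFoam/cavity /path/to/your/work/dir/
cd /path/to/your/work/dir/cavity/cavity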

For example, the decomposeParDict file in OpenFOAM is a configuration file used to define how a computational domain should be decomposed into subdomains for parallel processing. This file is crucial for simulations that run on multiple processors or cores, allowing OpenFOAM to distribute the workload efficiently across the available computational resources. The decomposition is typically performed using the decomposePar utility, which reads the settings from the decomposeParDict file before executing the domain decomposition.

This file looks like the following:

/*--------------------------------*- C++ -*----------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  v2206                                 |
|   \\  /    A nd           | Website:  www.openfoam.com                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

numberOfSubdomains  8;

method  hierarchical;

coeffs
{
    n   (2 2 2);
}


// ************************************************************************* //

This file specifies the number of processors used to run the application (8) and how the domain is split across them (2 2 2).

numberOfSubdomains is the number of subdomains into which the domain should be divided, which usually corresponds to the number of processors or cores available for the simulation. The coeffs in decomposeParDict serve to fine-tune the domain decomposition process, ensuring that the workload is distributed effectively across the computational resources.

Make sure that the product of the coeffs values equals numberOfSubdomains (2 × 2 × 2 = 8). For example, to run on 16 subdomains you could set numberOfSubdomains to 16 and n to (4 2 2).

To run OpenFOAM in parallel on Discovery, create a submission script.

Running OpenFOAM on One Node

First, create a submission script with the following content:

#!/bin/bash
#SBATCH --job-name=openfoam_sim          # Job name
#SBATCH --nodes=1                      # Number of nodes to use
#SBATCH --time=01:00:00                  # Time limit hrs:min:sec
#SBATCH --partition=normal                # Partition on which to run
#SBATCH --output=openfoam_sim_%j.log     # Standard output and error log
#SBATCH --cpus-per-task=2
#SBATCH --ntasks-per-node=8

# Load the Spack module that you installed before
module load spack/spack-openfoam

# Activate the Spack environment that you created
echo "activating the env ..."
spack env activate -d /path/to/environment/directory
echo "env activated"

# Move to the directory where your OpenFOAM case is located
cd /path/to/cavity/cavity


# Prepare the case (optional, e.g., meshing)
echo "blockmesh ..."
blockMesh


# Decompose the domain for parallel run
echo "decompose ..."
decomposePar -force
echo "end decompose!"

# Run the application in parallel
srun --ntasks-per-node=$SLURM_NTASKS_PER_NODE --cpus-per-task=$SLURM_CPUS_PER_TASK icoFoam -parallel

This script runs OpenFOAM in parallel with 8 processes on one node.

Then, submit this script using the sbatch command.
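
For example, if the script above was saved as run_openfoam.sh (a name chosen here for illustration):

# Submit the job and check its status in the queue
sbatch run_openfoam.sh
squeue -u $USER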

Running OpenFOAM on Two Nodes

First, create a submission script with the following content:

#!/bin/bash
#SBATCH --job-name=openfoam_sim          # Job name
#SBATCH --nodes=2                      # Number of nodes to use
#SBATCH --time=01:00:00                  # Time limit hrs:min:sec
#SBATCH --partition=normal                # Partition on which to run
#SBATCH --output=openfoam_sim_%j.log     # Standard output and error log
#SBATCH --cpus-per-task=2
#SBATCH --ntasks-per-node=4

# Load the Spack module that you installed before
module load spack/spack-openfoam

# Activate the Spack environment that you created
echo "activating the env ..."
spack env activate -d /path/to/environment/directory
echo "env activated"

# Move to the directory where your OpenFOAM case is located
cd /path/to/cavity/cavity


# Prepare the case (optional, e.g., meshing)
echo "blockmesh ..."
blockMesh


# Decompose the domain for parallel run
echo "decompose ..."
decomposePar -force
echo "end decompose!"

# Run the application in parallel
srun --ntasks-per-node=$SLURM_NTASKS_PER_NODE --cpus-per-task=$SLURM_CPUS_PER_TASK icoFoam -parallel

The value of the sbatch directive --nodes is 2 and the value of --ntasks-per-node is 4. This means that 8 processes will run across 2 nodes, 4 processes on each node.

Then, submit this script using the sbatch command.
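
After a parallel run completes, the results remain split across the processor* directories created by decomposePar. If you want a single reconstructed dataset for post-processing, you can merge them back with OpenFOAM's reconstructPar utility, run from the case directory:

# Merge the per-processor results back into a single case (run inside the case directory)
cd /path/to/cavity/cavity
reconstructPar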