GPU Containers

Apptainer supports running application containers that use NVIDIA’s CUDA GPU compute framework.

The following sections show how to build and run a GPU container on Discovery.

For more detail, refer to Apptainer GPU Support.

Using GPU Containers

The first step to using GPU containers is to pull and build a container, as explained on the Building Containers page. You can also use a GPU image as the base image in a definition (def) file, as shown on the Build Recipe page.
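As a sketch of the second approach, a def file can bootstrap from an NVIDIA CUDA image on Docker Hub. The tag below matches the example later on this page, while the package installed in %post is purely illustrative:

```
Bootstrap: docker
From: nvidia/cuda:10.1-cudnn8-devel-ubuntu16.04

%post
    # Illustrative only: install extra tools on top of the CUDA base image
    apt-get update && apt-get install -y --no-install-recommends wget
    rm -rf /var/lib/apt/lists/*

%environment
    export LC_ALL=C
```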

Docker Hub provides many NVIDIA GPU-enabled containers that use CUDA and cuDNN for processing. For more NVIDIA containers, visit Docker Hub - NVIDIA Containers.

As an example, to download the GPU container nvidia/cuda:10.1-cudnn8-devel-ubuntu16.04 and convert it to a SIF file named cuda.sif, run:

apptainer build cuda.sif docker://nvidia/cuda:10.1-cudnn8-devel-ubuntu16.04

There are different ways to interact with GPU containers, as shown on the Using Containers page.
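As a quick sketch, the three common modes of interaction with the cuda.sif image built above are (these all require running on a GPU node):

```
# Open an interactive shell inside the container with GPU support
apptainer shell --nv cuda.sif

# Run a single command inside the container (here, checking GPU visibility)
apptainer exec --nv cuda.sif nvidia-smi

# Execute the container's default runscript, if it defines one
apptainer run --nv cuda.sif
```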

This documentation uses the CUDA program stat from the GPU Jobs page.

To compile the program, run:

apptainer exec --nv cuda.sif nvcc -o stat

The --nv flag is an Apptainer option required to enable NVIDIA GPU support inside the container.

To run the compiled program stat:

$ apptainer exec --nv cuda.sif ./stat
CUDA Device #356482
Name                         - Tesla V100-PCIE-32GB
Total global memory          - 22024192 MB
Total constant memory        - 18014261072635881 KB
Shared memory per block      - 0 KB
Total registers per block    - 2
Maximum threads per block    - 2137020592
Clock rate                   - 32767
Number of multi-processors   - 0
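
The source of the stat program itself lives on the GPU Jobs page and is not reproduced here. A minimal sketch that reports similar device properties, written against the CUDA runtime API's cudaGetDeviceProperties call (this sketch is an assumption, not the original program), might look like:

```
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    // Bail out if no CUDA-capable device is visible (e.g. --nv was omitted)
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA devices found\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("CUDA Device #%d\n", dev);
        printf("Name                         - %s\n", prop.name);
        printf("Total global memory          - %zu MB\n", prop.totalGlobalMem / (1024 * 1024));
        printf("Total constant memory        - %zu KB\n", prop.totalConstMem / 1024);
        printf("Shared memory per block      - %zu KB\n", prop.sharedMemPerBlock / 1024);
        printf("Total registers per block    - %d\n", prop.regsPerBlock);
        printf("Maximum threads per block    - %d\n", prop.maxThreadsPerBlock);
        printf("Clock rate                   - %d kHz\n", prop.clockRate);
        printf("Number of multi-processors   - %d\n", prop.multiProcessorCount);
    }
    return 0;
}
```

A file containing this code could be compiled and run with the same apptainer exec --nv commands shown above.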