
Podman in HPC environments

By Adrian Reber

A High-Performance Computing (HPC) environment can mean a lot of things, but in this article I want to focus on running Message Passing Interface (MPI) parallelized programs with the help of Podman.

The following is a simple MPI-based example taken from Open MPI: ring.c
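The ring program passes a message around all ranks in a ring and then synchronizes on an MPI barrier. What follows is only a minimal sketch of that idea, not the exact ring.c shipped with Open MPI, but it prints the same kind of messages:

/*
 * A minimal sketch of an MPI ring program. This is not the exact
 * ring.c from Open MPI, but it follows the same idea: rank 0 injects
 * a message that is passed around the ring of ranks until a counter
 * reaches zero, then all ranks meet at a barrier.
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, next, prev, message;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Rank %d has cleared MPI_Init\n", rank);

    next = (rank + 1) % size;
    prev = (rank + size - 1) % size;

    if (rank == 0) {
        message = 10; /* number of trips around the ring */
        MPI_Send(&message, 1, MPI_INT, next, 201, MPI_COMM_WORLD);
    }

    while (1) {
        MPI_Recv(&message, 1, MPI_INT, prev, 201, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        if (rank == 0)
            message--;            /* rank 0 counts down the trips */
        MPI_Send(&message, 1, MPI_INT, next, 201, MPI_COMM_WORLD);
        if (message == 0)
            break;
    }

    if (rank == 0)                /* drain the last message from the ring */
        MPI_Recv(&message, 1, MPI_INT, prev, 201, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    printf("Rank %d has completed ring\n", rank);

    MPI_Barrier(MPI_COMM_WORLD);
    printf("Rank %d has completed MPI_Barrier\n", rank);

    MPI_Finalize();
    return 0;
}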

To use it on a Fedora 30 system, I first installed Open MPI and then compiled the example:

$ sudo dnf install openmpi-devel
$ module load mpi/openmpi-x86_64
$ echo "module load mpi/openmpi-x86_64" >> .bashrc
$ mpicc -o ring ring.c

Running this on my test system (Fedora 30) with 4 CPUs gives me this:

$ mpirun ./ring
Rank 3 has cleared MPI_Init
Rank 1 has cleared MPI_Init
Rank 2 has cleared MPI_Init
Rank 0 has cleared MPI_Init
Rank 1 has completed ring
Rank 2 has completed ring
Rank 3 has completed ring
Rank 0 has completed ring
Rank 3 has completed MPI_Barrier
Rank 1 has completed MPI_Barrier
Rank 0 has completed MPI_Barrier
Rank 2 has completed MPI_Barrier

To be able to use Podman in combination with mpirun, I created a container image with the following definition:

$ cat Dockerfile
FROM registry.fedoraproject.org/fedora:30

RUN dnf -y install openmpi && \
    dnf clean all

COPY ring /home/ring

After building the container image (podman build --tag=mpi-test:30 .) I pushed it to the quay.io container registry (podman push mpi-test:30 quay.io/adrianreber/mpi-test:30) and can now pull it like this:

$ podman pull quay.io/adrianreber/mpi-test:30

Then I can use mpirun to start multiple containers. In my case, 4 containers are started, as each of the two systems involved has 2 CPUs:

$ mpirun --hostfile hostfile \
   --mca orte_tmpdir_base /tmp/podman-mpirun \
   podman run --env-host \
     -v /tmp/podman-mpirun:/tmp/podman-mpirun \
     --userns=keep-id \
     --net=host --pid=host --ipc=host \
     quay.io/adrianreber/mpi-test:30 /home/ring
Rank 2 has cleared MPI_Init
Rank 2 has completed ring
Rank 2 has completed MPI_Barrier
Rank 3 has cleared MPI_Init
Rank 3 has completed ring
Rank 3 has completed MPI_Barrier
Rank 1 has cleared MPI_Init
Rank 1 has completed ring
Rank 1 has completed MPI_Barrier
Rank 0 has cleared MPI_Init
Rank 0 has completed ring
Rank 0 has completed MPI_Barrier

Now mpirun starts 4 Podman containers, and each container runs one instance of ring. All 4 processes communicate with each other over MPI.

The following mpirun options are used:

--hostfile hostfile: read the hosts to start processes on from the file hostfile.
--mca orte_tmpdir_base /tmp/podman-mpirun: tell Open MPI to create its temporary files under /tmp/podman-mpirun instead of /tmp, so that one known directory can be mounted into the containers.

The following podman options are used:

--env-host: pass all host environment variables into the container, so that the environment variables Open MPI sets for each process are visible inside.
-v /tmp/podman-mpirun:/tmp/podman-mpirun: mount Open MPI's temporary directory from the host into the container.
--userns=keep-id: map the user ID inside the container to the same user ID on the host.
--net=host --pid=host --ipc=host: use the host's network, PID and IPC namespaces so that the processes can reach each other.

This is it for all the necessary parameters for mpirun; after them comes the command that mpirun should start, podman in this case.
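For completeness, the hostfile referenced above is just a plain text file listing the hosts mpirun should use. The hostnames below are placeholders for my two test systems; slots tells Open MPI how many processes to start on each host:

$ cat hostfile
test1.example.com slots=2
test2.example.com slots=2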

Thanks to Podman’s fork-exec model, it is really simple to use it in combination with Open MPI, as Open MPI starts Podman just as it would start the actual MPI application.