Containers Guide

Containers are being adopted in HPC workloads. Containers rely on existing kernel features to allow greater user control over what applications see and can interact with at any given time. For HPC workloads, these are usually restricted to the mount namespace. Slurm natively supports requesting unprivileged OCI Containers for jobs and steps.

Known limitations

The following is a list of known limitations of the Slurm OCI container implementation.

  • All containers must run under unprivileged (i.e. rootless) invocation. All commands are called by Slurm as the user with no special permissions.
  • Custom container networks are not supported. All containers should work with the "host" network.
  • Slurm will not transfer the OCI container bundle to the execution nodes. The bundle must already exist on the requested path on the execution node.
  • Containers are limited by the OCI runtime used. If the runtime does not support a certain feature, then that feature will not work for any job using a container.
  • oci.conf must be configured on the execution node for the job, otherwise the requested container will be ignored by Slurm (but can be used by the job or any given plugin).

Prerequisites

The host kernel must be configured to allow userland (unprivileged) containers:

$ sudo sysctl -w kernel.unprivileged_userns_clone=1
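
This kernel parameter only exists on kernels that ship it (e.g. Debian derivatives); on other kernels unprivileged user namespaces are controlled by different settings or enabled by default. Where the parameter exists, the setting can be made persistent across reboots with a sysctl drop-in file (a minimal sketch; the file name is arbitrary):

$ echo 'kernel.unprivileged_userns_clone=1' | sudo tee /etc/sysctl.d/90-unprivileged-userns.conf
$ sudo sysctl --system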

Docker also provides a tool to verify the kernel configuration:

$ dockerd-rootless-setuptool.sh check --force
[INFO] Requirements are satisfied
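
If Docker is not installed on the node, a rough manual check of unprivileged user namespace support can be made with util-linux (a sketch only; it does not replace the full check above):

$ unshare --user --map-root-user true && echo 'unprivileged user namespaces available'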

Required software:

  • Fully functional OCI runtime. It needs to be able to run outside of Slurm first.
  • Fully functional OCI bundle generation tools. Slurm requires OCI Container compliant bundles for jobs.

Example configurations for various OCI Runtimes

The OCI Runtime Specification provides requirements for all compliant runtimes but does not expressly provide requirements on how runtimes will use arguments. In order to support as many runtimes as possible, Slurm provides pattern replacement for commands issued for each OCI runtime operation. This will allow a site to edit how the OCI runtimes are called as needed to ensure compatibility.

For runc and crun, there are two sets of examples provided. The OCI runtime specification only defines the create and start operation sequence, but these runtimes provide a much more efficient run operation. Sites are strongly encouraged to use the run operation (if provided) as the start and create operations require that Slurm poll the OCI runtime to know when the containers have completed execution. While Slurm attempts to be as efficient as possible with polling, it will result in a thread using CPU time inside of the job and slower response of Slurm to catch when container execution is complete.

The examples provided have been tested to work but are only suggestions. The "--root" directory is set as "/tmp/" to avoid certain permission issues but should be changed to a secured and dedicated directory for a production cluster.
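
For reference, the replacement patterns used in the example commands below appear to expand as follows (inferred from the examples themselves; consult the oci.conf man page for the authoritative list):

    # Pattern replacements used in the RunTime* commands below (sketch):
    #   %b - absolute path to the OCI bundle
    #   %n - node name
    #   %u - user name
    #   %j - job ID
    #   %s - step ID
    #   %t - task ID
    # The concatenation %n.%u.%j.%s.%t serves as a unique container ID per task.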

  • runc using create/start:
    RunTimeQuery="runc --rootless=true --root=/tmp/ state %n.%u.%j.%s.%t"
    RunTimeCreate="runc --rootless=true --root=/tmp/ create %n.%u.%j.%s.%t -b %b"
    RunTimeStart="runc --rootless=true --root=/tmp/ start %n.%u.%j.%s.%t"
    RunTimeKill="runc --rootless=true --root=/tmp/ kill -a %n.%u.%j.%s.%t"
    RunTimeDelete="runc --rootless=true --root=/tmp/ delete --force %n.%u.%j.%s.%t"
    
  • runc using run (suggested):
    RunTimeQuery="runc --rootless=true --root=/tmp/ state %n.%u.%j.%s.%t"
    RunTimeKill="runc --rootless=true --root=/tmp/ kill -a %n.%u.%j.%s.%t"
    RunTimeDelete="runc --rootless=true --root=/tmp/ delete --force %n.%u.%j.%s.%t"
    RunTimeRun="runc --rootless=true --root=/tmp/ run %n.%u.%j.%s.%t -b %b"
    
  • crun using create/start:
    RunTimeQuery="crun --rootless=true --root=/tmp/ state %n.%u.%j.%s.%t"
    RunTimeKill="crun --rootless=true --root=/tmp/ kill -a %n.%u.%j.%s.%t"
    RunTimeDelete="crun --rootless=true --root=/tmp/ delete --force %n.%u.%j.%s.%t"
    RunTimeCreate="crun --rootless=true --root=/tmp/ create --bundle %b %n.%u.%j.%s.%t"
    RunTimeStart="crun --rootless=true --root=/tmp/ start %n.%u.%j.%s.%t"
    
  • crun using run (suggested):
    RunTimeQuery="crun --rootless=true --root=/tmp/ state %n.%u.%j.%s.%t"
    RunTimeKill="crun --rootless=true --root=/tmp/ kill -a %n.%u.%j.%s.%t"
    RunTimeDelete="crun --rootless=true --root=/tmp/ delete --force %n.%u.%j.%s.%t"
    RunTimeRun="crun --rootless=true --root=/tmp/ run --bundle %b %n.%u.%j.%s.%t"
    
  • nvidia-container-runtime using create/start:
    RunTimeQuery="nvidia-container-runtime --rootless=true --root=/tmp/ state %n.%u.%j.%s.%t"
    RunTimeCreate="nvidia-container-runtime --rootless=true --root=/tmp/ create %n.%u.%j.%s.%t -b %b"
    RunTimeStart="nvidia-container-runtime --rootless=true --root=/tmp/ start %n.%u.%j.%s.%t"
    RunTimeKill="nvidia-container-runtime --rootless=true --root=/tmp/ kill -a %n.%u.%j.%s.%t"
    RunTimeDelete="nvidia-container-runtime --rootless=true --root=/tmp/ delete --force %n.%u.%j.%s.%t"
    
  • nvidia-container-runtime using run (suggested):
    RunTimeQuery="nvidia-container-runtime --rootless=true --root=/tmp/ state %n.%u.%j.%s.%t"
    RunTimeKill="nvidia-container-runtime --rootless=true --root=/tmp/ kill -a %n.%u.%j.%s.%t"
    RunTimeDelete="nvidia-container-runtime --rootless=true --root=/tmp/ delete --force %n.%u.%j.%s.%t"
    RunTimeRun="nvidia-container-runtime --rootless=true --root=/tmp/ run %n.%u.%j.%s.%t -b %b"
    
  • hpcng singularity v3.8.0:
    RunTimeQuery="sudo singularity oci state %n.%u.%j.%s.%t"
    RunTimeCreate="sudo singularity oci create --bundle %b %n.%u.%j.%s.%t"
    RunTimeStart="sudo singularity oci start %n.%u.%j.%s.%t"
    RunTimeKill="sudo singularity oci kill %n.%u.%j.%s.%t"
    RunTimeDelete="sudo singularity oci delete %n.%u.%j.%s.%t"
    
    WARNING: Singularity (v3.8.0) requires sudo for OCI support, which is a security risk since the user is able to modify these calls. This example is only provided for testing purposes.

Testing OCI runtime outside of Slurm

Slurm calls the OCI runtime directly in the job step. If it fails, then the job will also fail.

  • Go to the directory containing the OCI Container bundle:
    cd $ABS_PATH_TO_BUNDLE
  • Execute OCI Container runtime:
    $OCIRunTime $ARGS create test --bundle $ABS_PATH_TO_BUNDLE
    $OCIRunTime $ARGS start test
    $OCIRunTime $ARGS kill test
    $OCIRunTime $ARGS delete test
    If these commands succeed, then the OCI runtime is correctly configured and can be tested in Slurm.
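
For example, with runc and a bundle unpacked at /tmp/image (a hypothetical path; see the OCI Container bundle section below), the same sequence looks like:

    cd /tmp/image
    runc --rootless=true --root=/tmp/ create test -b /tmp/image
    runc --rootless=true --root=/tmp/ start test
    runc --rootless=true --root=/tmp/ kill test
    runc --rootless=true --root=/tmp/ delete test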

Requesting container jobs or steps

salloc, srun and sbatch (in Slurm 21.08+) have the '--container' argument, which can be used to request container runtime execution. The requested job container will not be inherited by the steps called, excluding the batch and interactive steps.

  • Batch step inside of container:
    sbatch --container $ABS_PATH_TO_BUNDLE --wrap 'bash -c "cat /etc/*rel*"'
    
  • Batch job with step 0 inside of container:
    sbatch --wrap 'srun --container $ABS_PATH_TO_BUNDLE bash -c "cat /etc/*rel*"'
    
  • Interactive step inside of container:
    salloc --container $ABS_PATH_TO_BUNDLE bash -c "cat /etc/*rel*"
  • Interactive job step 0 inside of container:
    salloc srun --container $ABS_PATH_TO_BUNDLE bash -c "cat /etc/*rel*"
    
  • Job with step 0 inside of container:
    srun --container $ABS_PATH_TO_BUNDLE bash -c "cat /etc/*rel*"
  • Job with step 1 inside of container:
    srun srun --container $ABS_PATH_TO_BUNDLE bash -c "cat /etc/*rel*"
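
To confirm that a step really ran inside the bundle rather than on the host, compare the distribution identification seen by the host with the one seen by the containerized step (assuming the bundle holds a different distribution than the host):

    grep NAME /etc/os-release                                        # host view
    srun --container $ABS_PATH_TO_BUNDLE grep NAME /etc/os-release   # container view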
    

OCI Container bundle

There are multiple ways to generate an OCI Container bundle. The instructions below describe the method we found easiest. The OCI standard defines the requirements for any given bundle in its Filesystem Bundle section.
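
At a minimum, a bundle is a directory holding the container's root filesystem and a config.json describing how to run it (rootfs is the conventional directory name; config.json points to it via its "root.path" field):

    /tmp/image/        <- bundle directory (hypothetical path used below)
        config.json    <- OCI runtime configuration (generated in the runc step below)
        rootfs/        <- container root filesystem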

Here are instructions on how to generate a container using a few different container solutions:

  • Use an existing tool to create a filesystem image in /tmp/image/rootfs:
    • debootstrap:
      sudo debootstrap stable /tmp/image/rootfs http://deb.debian.org/debian/
    • yum (the packages to install into the new root must be listed explicitly):
      sudo yum --config /etc/yum.conf --installroot=/tmp/image/rootfs/ --nogpgcheck --releasever=${CENTOS_RELEASE} -y install ${PACKAGES}
    • docker:
      mkdir -p /tmp/image/rootfs
      docker pull alpine
      docker create --name alpine alpine
      docker export alpine | tar -C /tmp/image/rootfs/ -xf -
      docker rm alpine
  • Configuring a Bundle for Runtime to execute:
    • Use runc to generate a config.json:
      • cd /tmp/image
        runc --rootless=true spec
      • The generated config.json from runc (currently) needs to be modified for rootless use; the hostID values in the added mappings are discussed in the note after this list:
        diff --git a/config.json.orig b/config.json
        index 14f7ad3..c48acb9 100644
        --- a/config.json.orig
        +++ b/config.json
        @@ -81,8 +81,7 @@
                                        "noexec",
                                        "newinstance",
                                        "ptmxmode=0666",
        -                               "mode=0620",
        -                               "gid=5"
        +                               "mode=0620"
                                ]
                        },
                        {
        @@ -132,6 +131,20 @@
                        }
                ],
                "linux": {
        +               "uidMappings": [
        +                       {
        +                               "containerID": 0,
        +                               "hostID": 1000,
        +                               "size": 1
        +                       }
        +               ],
        +               "gidMappings": [
        +                       {
        +                               "containerID": 0,
        +                               "hostID": 1000,
        +                               "size": 1
        +                       }
        +               ],
                        "resources": {
                                "devices": [
                                        {
        @@ -155,6 +168,9 @@
                                },
                                {
                                        "type": "mount"
        +                       },
        +                       {
        +                               "type": "user"
                                }
                        ],
                        "maskedPaths": [
    • Use umoci and skopeo to generate a full image:
      cd /tmp/
      skopeo copy docker://alpine:latest oci:alpine:latest
      umoci unpack --rootless --image alpine alpine
    • Use singularity to generate a full image:
      sudo singularity sif dump alpine /tmp/alpine.sif
      sudo singularity oci mount /tmp/alpine.sif /tmp/image/
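
Note that the hostID values in the uidMappings and gidMappings added to config.json above (1000 in the diff) must match the UID and GID of the user invoking the rootless runtime on the execution node. They can be checked with:

    id -u    # value for "hostID" in uidMappings
    id -g    # value for "hostID" in gidMappings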

Container support via Plugin

Slurm also allows container developers to create SPANK Plugins that can be called at various points of job execution to support containers. Slurm is generally agnostic to SPANK based containers and can be made to start most, if not all, types. Any site using a plugin to start containers should not create or configure the "oci.conf" configuration file; leaving it unconfigured deactivates the OCI container functionality.

Some container developers have chosen to provide only a command line interface, which requires users to explicitly execute the container solution.

Links to several third party container solutions are provided below:

Please note this list is not exhaustive as new container types are being created all the time.


Container Types

Charliecloud

Charliecloud is a user namespace container system sponsored by LANL to provide HPC containers.

Docker (running as root)

Docker currently has multiple design points that make it unfriendly to HPC systems. The issue that usually stops most sites from using Docker is the requirement that "only trusted users should be allowed to control your Docker daemon" [Docker Security], which is not acceptable to most HPC systems.

Sites with trusted users can add them to the docker Unix group and allow them to control Docker directly from inside of jobs. There is currently no direct support for starting or stopping docker containers in Slurm.

Sites are recommended to extract the container image from docker (procedure above) and then run the containers using Slurm.

UDOCKER

UDOCKER is a Docker feature-subset clone designed to allow execution of docker commands without increased user privileges.

Rootless Docker

Rootless Docker (>=v20.10) requires no extra permissions for users and currently (as of January 2021) has no known security issues with users gaining privileges. Each user will need to run an instance of the dockerd server on each node of the job in order to use docker. There are currently no helper scripts or plugins for Slurm to automate starting up or tearing down these docker daemons.
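
A minimal sketch of what each user would have to run on every node of the job, assuming the Docker rootless extras (dockerd-rootless.sh) are installed and a per-user runtime directory is available (paths are illustrative):

    export XDG_RUNTIME_DIR=/run/user/$(id -u)
    dockerd-rootless.sh &                               # per-user daemon on this node
    export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock
    docker info                                         # client now talks to the per-user daemon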

Sites are recommended to extract the container image from docker (procedure above) and then run the containers using Slurm.

Kubernetes Pods (k8s)

Kubernetes is a container orchestration system that uses Pods, which are generally a logical grouping of containers for a singular purpose.

There is currently no direct support for Kubernetes Pods in Slurm. Sites are encouraged to extract the OCI image from Kubernetes and then run the containers using Slurm. Users can create jobs that start together using the "--dependency=" argument in sbatch to mirror the functionality of Pods. Users can also use a larger allocation and then start each pod as a parallel step using srun.
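
A rough sketch of the second approach, with one allocation and one parallel step per container (bundle paths and commands are hypothetical):

    #!/bin/bash
    #SBATCH -n2
    # start two "pod members" as parallel steps, each inside its own bundle
    srun -n1 --container /tmp/web_bundle ./start_web.sh &
    srun -n1 --container /tmp/db_bundle  ./start_db.sh &
    wait    # the "pod" ends when both member steps have finished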

Shifter

Shifter is a container project out of NERSC to provide HPC containers with full scheduler integration.

Singularity

Singularity is a hybrid container system that supports:

  • Slurm integration (for singularity v2.x) via Plugin. A full description of the plugin was provided in the SLUG17 Singularity Presentation.
  • User namespace containers via sandbox mode that require no additional permissions.
  • Users directly calling singularity via setuid executable outside of Slurm.

ENROOT

Enroot is a user namespace container system sponsored by NVIDIA that supports:

  • Slurm integration via pyxis
  • Native support for NVIDIA GPUs
  • Faster Docker image imports

Podman

Podman is a user namespace container system sponsored by Red Hat/IBM that supports:

  • Drop-in replacement for Docker.
  • Called directly by users. (Currently lacks direct Slurm support).
  • Rootless image building via buildah
  • Native OCI Image support

Sarus

Sarus is a privileged container system sponsored by ETH Zurich CSCS.

Overview slides of Sarus are available online.


Last modified 5 August 2021