
GPU support in containers in MLOps - Commands & Configuration

Introduction
Many machine learning tasks rely on GPUs to train and run models quickly. Running these tasks inside containers is tricky because the container must reach the GPU hardware on the host machine. GPU support in containers solves this by letting containers access host GPUs safely and predictably. You will want it in situations like these:
When you want to train a machine learning model inside a container and need faster processing using a GPU.
When you want to run deep learning inference in a container that requires GPU acceleration.
When you want to share GPU resources between multiple containerized applications without conflicts.
When you want to package your ML app with all dependencies and GPU support for easy deployment.
When you want to test GPU-enabled ML code in a consistent environment across different machines.
Config File - Dockerfile
FROM nvidia/cuda:12.1.1-runtime-ubuntu22.04

RUN apt-get update && apt-get install -y python3 python3-pip

COPY requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip3 install -r requirements.txt

COPY . /app

CMD ["python3", "train.py"]

This Dockerfile uses an official NVIDIA CUDA base image that includes the CUDA runtime libraries needed for GPU tasks. (The GPU driver itself stays on the host and is exposed to the container by the NVIDIA Container Toolkit.)

It installs Python and pip, then copies and installs Python dependencies from requirements.txt.

The application code is copied into the container, and the default command runs the training script.
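If you deploy with Docker Compose instead of plain docker run, the same GPU request can be declared in the service definition. This is a sketch assuming the image above is built as 'gpu-ml-app'; the service name 'trainer' is illustrative:

```yaml
services:
  trainer:
    image: gpu-ml-app          # image built from the Dockerfile above
    command: python3 train.py
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all       # or an integer, or a device_ids list
              capabilities: [gpu]
```

This maps to the same mechanism as '--gpus all': Compose asks the NVIDIA container runtime to reserve the listed devices for the service.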

Commands
This command builds the Docker image named 'gpu-ml-app' from the Dockerfile in the current directory. It packages the app with GPU support libraries.
Terminal
docker build -t gpu-ml-app .
Expected Output
[+] Building 12.3s (10/10) FINISHED
 => [internal] load build definition from Dockerfile                      0.1s
 => [internal] load .dockerignore                                         0.0s
 => [internal] load metadata for nvidia/cuda:12.1.1-runtime-ubuntu22.04   1.2s
 => [1/6] FROM nvidia/cuda:12.1.1-runtime-ubuntu22.04                     0.0s
 => [2/6] RUN apt-get update && apt-get install -y python3 python3-pip    8.5s
 => [3/6] COPY requirements.txt /app/requirements.txt                     0.0s
 => [4/6] WORKDIR /app                                                    0.0s
 => [5/6] RUN pip3 install -r requirements.txt                            2.3s
 => [6/6] COPY . /app                                                     0.1s
 => exporting to image                                                    0.1s
 => => writing image sha256:abc123def456...                               0.0s
 => => naming to docker.io/library/gpu-ml-app                             0.0s
This command runs the container with access to all GPUs on the host. The '--gpus all' flag enables GPU support inside the container; it requires the NVIDIA Container Toolkit to be installed on the host.
Terminal
docker run --gpus all --rm gpu-ml-app
Expected Output
Epoch 1/10  Loss: 0.45
Epoch 2/10  Loss: 0.38
Training complete.
--gpus all - Allows the container to access all GPUs on the host machine.
--rm - Automatically removes the container after it stops to keep the system clean.
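When several containers share one host, you usually do not want to hand every GPU to every container. The '--gpus' flag also accepts specific devices or a count; these command variants are sketches (the image name is the one built above):

```shell
# Expose only GPU 0 to the container. The inner double quotes stop the
# shell from splitting the device list, which matters for "device=0,1".
docker run --gpus '"device=0"' --rm gpu-ml-app

# Or request a number of GPUs rather than naming specific devices.
docker run --gpus 2 --rm gpu-ml-app
```

Pinning containers to different devices this way is the simplest form of GPU sharing between containerized applications.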
This command shows the status of NVIDIA GPUs on the host machine, including usage and running processes. It helps verify that GPUs are available.
Terminal
nvidia-smi
Expected Output
Tue Jun 20 12:00:00 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.85.12    Driver Version: 525.85.12    CUDA Version: 12.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA RTX 3090     On  | 00000000:01:00.0 Off |                  N/A |
| 30%   45C    P8    20W / 350W|    500MiB / 24576MiB |     10%      Default |
+-------------------------------+----------------------+----------------------+
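Scripts that need the same information can ask nvidia-smi for machine-readable CSV via '--query-gpu'. A minimal Python sketch of parsing that output and picking the least-busy GPU; the helper names are illustrative, and the sample string mimics the CSV that 'nvidia-smi --query-gpu=index,name,memory.used,memory.total,utilization.gpu --format=csv' prints:

```python
import csv
import io

def parse_gpu_query(csv_text: str):
    """Parse CSV output from `nvidia-smi --query-gpu=... --format=csv`
    into a list of dicts keyed by the header row."""
    reader = csv.reader(io.StringIO(csv_text))
    header = [h.strip() for h in next(reader)]
    return [dict(zip(header, (f.strip() for f in row))) for row in reader]

def least_loaded_gpu(rows):
    """Return the index of the GPU with the lowest reported utilization."""
    def util(row):
        # Values look like "10 %"; keep only the number.
        return int(row["utilization.gpu [%]"].split()[0])
    return min(rows, key=util)["index"]

# Sample output (hypothetical), standing in for a real nvidia-smi call.
sample = """index, name, memory.used [MiB], memory.total [MiB], utilization.gpu [%]
0, NVIDIA RTX 3090, 500 MiB, 24576 MiB, 10 %
1, NVIDIA RTX 3090, 12000 MiB, 24576 MiB, 85 %"""

rows = parse_gpu_query(sample)
print(least_loaded_gpu(rows))  # → 0 (the idle GPU in the sample)
```

In a real script you would replace the sample string with the output of a subprocess call to nvidia-smi, then export the chosen index via CUDA_VISIBLE_DEVICES or the '--gpus' flag.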
Key Concept

If you remember nothing else from this pattern, remember: use the '--gpus' flag with Docker and a CUDA base image to enable GPU access inside containers.

Common Mistakes
Not using the '--gpus' flag when running the container.
Without this flag, the container cannot access the GPU hardware, so GPU-accelerated code will fail or run on CPU only.
Always add '--gpus all' or specify GPUs explicitly when running GPU-dependent containers.
Using a base image without the CUDA libraries your framework needs.
The container will lack the CUDA runtime, causing errors when code tries to use the GPU.
Use official NVIDIA CUDA base images, which bundle the CUDA runtime; the GPU driver itself lives on the host.
Running 'nvidia-smi' inside a container without GPU support enabled.
'nvidia-smi' will fail or show no GPUs because the container cannot see the GPU hardware.
Run containers with '--gpus' flag and verify GPU availability on the host with 'nvidia-smi'.
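A quick way to check the host-side setup (NVIDIA driver plus NVIDIA Container Toolkit) before debugging your own image is to run nvidia-smi from a bare CUDA base image:

```shell
# Should print the same GPU table that nvidia-smi shows on the host.
# An error here points at the driver or container-toolkit install,
# not at your application image.
docker run --rm --gpus all nvidia/cuda:12.1.1-base-ubuntu22.04 nvidia-smi
```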
Summary
Build a Docker image using an NVIDIA CUDA base image to include GPU support libraries.
Run the container with the '--gpus all' flag to enable GPU access inside the container.
Use 'nvidia-smi' on the host to check GPU status and confirm availability.