MLOps · DevOps · ~10 mins

GPU support in containers in MLOps - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1: fill in the blank (easy)

Complete the code to specify the GPU runtime when running a Docker container.

docker run --gpus [1] nvidia/cuda:11.0-base nvidia-smi
A. cpu
B. none
C. all
D. default
Common Mistakes
Using 'none' disables GPU access.
Using 'cpu' is invalid for GPU runtime.
Using 'default' does not specify GPU usage.
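A worked version of the corrected command, shown as a sketch. It assumes the NVIDIA Container Toolkit is installed on the host; the `--gpus` flag is standard Docker (19.03+) syntax:

```shell
# Expose all host GPUs to the container and verify with nvidia-smi
docker run --gpus all nvidia/cuda:11.0-base nvidia-smi

# --gpus also accepts a device count or specific device IDs:
docker run --gpus 2 nvidia/cuda:11.0-base nvidia-smi
docker run --gpus '"device=0,1"' nvidia/cuda:11.0-base nvidia-smi
```

These commands only succeed on a machine with an NVIDIA GPU, its driver, and the NVIDIA Container Toolkit installed.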
Task 2: fill in the blank (medium)

Complete the Dockerfile line to install NVIDIA CUDA toolkit inside the container.

RUN apt-get update && apt-get install -y [1]
A. python3
B. git
C. nginx
D. cuda-toolkit-11-0
Common Mistakes
Installing unrelated packages like python3 or nginx.
Forgetting to run apt-get update before installing.
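The completed line in context, as a minimal Dockerfile sketch. The toolkit package name should match the base image's CUDA version (11.0 here); removing the apt lists afterwards keeps the layer small:

```dockerfile
FROM nvidia/cuda:11.0-base
# Install the matching CUDA toolkit; clean apt lists to shrink the image layer
RUN apt-get update && \
    apt-get install -y --no-install-recommends cuda-toolkit-11-0 && \
    rm -rf /var/lib/apt/lists/*
```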
Task 3: fill in the blank (hard)

Fix the error in the Docker run command to enable GPU support.

docker run --runtime=[1] nvidia/cuda:11.0-base nvidia-smi
A. docker
B. nvidia
C. default
D. gpu
Common Mistakes
Using 'docker' or 'default' runtimes disables GPU support.
Using 'gpu' is not a valid runtime name.
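For context, `--runtime=nvidia` is the older nvidia-docker2 invocation; on Docker 19.03+ the built-in `--gpus` flag is the preferred replacement. A sketch of both forms:

```shell
# Legacy (nvidia-docker2): select the NVIDIA container runtime explicitly
docker run --runtime=nvidia nvidia/cuda:11.0-base nvidia-smi

# Modern equivalent (Docker 19.03+ with the NVIDIA Container Toolkit)
docker run --gpus all nvidia/cuda:11.0-base nvidia-smi
```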
Task 4: fill in the blank (hard)

Fill both blanks to create a Docker Compose service with GPU support.

services:
  gpu-service:
    image: nvidia/cuda:11.0-base
    deploy:
      resources:
        reservations:
          devices:
            - driver: [1]
              count: [2]
              capabilities: [gpu]
A. nvidia
B. all
C. 2
D. default
Common Mistakes
Using 'default' as driver disables GPU support.
Using 'all' as count: the Compose specification does accept the literal all, but this exercise expects a numeric count.
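The filled-in Compose file as a sketch, with `driver: nvidia` and two GPUs reserved (the `command` line is added here only so the service has something to run):

```yaml
services:
  gpu-service:
    image: nvidia/cuda:11.0-base
    command: nvidia-smi
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 2              # number of GPUs to reserve
              capabilities: [gpu]
```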
Task 5: fill in the blank (hard)

Fill all three blanks to write a Dockerfile snippet that sets environment variables for CUDA and runs a GPU test.

ENV CUDA_VERSION=[1]
ENV PATH=/usr/local/cuda-[2]/bin:${PATH}
RUN nvidia-smi --query-gpu=name,memory.total --format=csv > [3]
A. 11.0
C. /tmp/gpu_info.csv
D. /var/log/gpu.log
Common Mistakes
Mismatching CUDA versions in ENV variables.
Saving output to a non-writable or unrelated file path.
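A consistent version of the snippet as a Dockerfile sketch: both ENV lines pin the same version (11.0), and /tmp is writable in virtually every image. One caveat worth knowing: a plain `docker build` has no GPU access, so an nvidia-smi RUN step typically only works if the build environment itself exposes the GPU, and is otherwise moved to container start:

```dockerfile
FROM nvidia/cuda:11.0-base
ENV CUDA_VERSION=11.0
ENV PATH=/usr/local/cuda-11.0/bin:${PATH}
# Caveat: plain `docker build` has no GPU access, so this step usually runs
# at container start (e.g. in an ENTRYPOINT) rather than at build time.
RUN nvidia-smi --query-gpu=name,memory.total --format=csv > /tmp/gpu_info.csv
```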