Complete the code to specify the GPU runtime when running a Docker container.
docker run --gpus [1] nvidia/cuda:11.0-base nvidia-smi
Using --gpus all allows the container to access all GPUs on the host.
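A sketch of the flag's common forms (assumes an NVIDIA GPU host with the NVIDIA Container Toolkit installed; GPU indices are illustrative):

```shell
# Expose all host GPUs to the container
docker run --gpus all nvidia/cuda:11.0-base nvidia-smi

# Expose at most two GPUs
docker run --gpus 2 nvidia/cuda:11.0-base nvidia-smi

# Expose a specific GPU by index (note the nested quoting)
docker run --gpus '"device=0"' nvidia/cuda:11.0-base nvidia-smi
```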
Complete the Dockerfile line to install NVIDIA CUDA toolkit inside the container.
RUN apt-get update && apt-get install -y [1]
Installing cuda-toolkit-11-0 provides CUDA support inside the container.
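A minimal Dockerfile sketch of the completed step, assuming the nvidia/cuda:11.0-base image (whose preconfigured NVIDIA apt repository provides the cuda-toolkit-11-0 package):

```dockerfile
FROM nvidia/cuda:11.0-base

# Install the CUDA toolkit, then clear the apt cache to keep the image small
RUN apt-get update && \
    apt-get install -y --no-install-recommends cuda-toolkit-11-0 && \
    rm -rf /var/lib/apt/lists/*
```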
Fix the error in the Docker run command to enable GPU support.
docker run --runtime=[1] nvidia/cuda:11.0-base nvidia-smi
The correct runtime for NVIDIA GPU support is nvidia.
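The legacy and modern forms side by side (both assume the NVIDIA Container Toolkit is installed on the host):

```shell
# Legacy form: select the nvidia runtime explicitly
docker run --runtime=nvidia nvidia/cuda:11.0-base nvidia-smi

# Modern form (Docker 19.03+): the built-in --gpus flag
docker run --gpus all nvidia/cuda:11.0-base nvidia-smi
```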
Fill both blanks to create a Docker Compose service with GPU support.
services:
  gpu-service:
    image: nvidia/cuda:11.0-base
    deploy:
      resources:
        reservations:
          devices:
            - driver: [1]
              count: [2]
              capabilities: [gpu]
The device driver should be nvidia, and count specifies how many GPUs to reserve, e.g., 2.
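A complete docker-compose.yml sketch with the blanks filled in (driver nvidia, reserving 2 GPUs; the `command: nvidia-smi` line is an illustrative addition, and Compose must support GPU device reservations):

```yaml
services:
  gpu-service:
    image: nvidia/cuda:11.0-base
    command: nvidia-smi
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 2
              capabilities: [gpu]
```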
Fill all three blanks to write a Dockerfile snippet that sets environment variables for CUDA and runs a GPU test.
ENV CUDA_VERSION=[1]
ENV PATH=/usr/local/cuda-[2]/bin:${PATH}
RUN nvidia-smi --query-gpu=name,memory.total --format=csv > [3]
Set CUDA version to 11.0, update PATH accordingly, and save GPU info to /tmp/gpu_info.csv.
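A Dockerfile sketch with all three blanks filled in. One caveat worth hedging: running nvidia-smi in a RUN step only succeeds if the build itself has GPU access, e.g. when the Docker daemon's default runtime is set to nvidia; a plain build will fail at that step.

```dockerfile
FROM nvidia/cuda:11.0-base

ENV CUDA_VERSION=11.0
ENV PATH=/usr/local/cuda-11.0/bin:${PATH}

# Requires GPU access at build time (e.g. default-runtime set to nvidia)
RUN nvidia-smi --query-gpu=name,memory.total --format=csv > /tmp/gpu_info.csv
```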