Which of the following Docker run commands correctly enables GPU support for a container?
Think about the official Docker flag that grants GPU access.
The correct flag to enable GPU support in Docker is --gpus, introduced in Docker 19.03. Option C uses --gpus all, which grants the container access to all GPUs on the host. The other options use incorrect or non-existent flags.
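A quick sketch of the flag in use (the image tag is illustrative; the host needs Docker 19.03+ and the NVIDIA Container Toolkit, so these commands are not runnable without a GPU):

```shell
# Grant the container access to all host GPUs:
docker run --rm --gpus all nvidia/cuda:11.0-runtime nvidia-smi

# Grant access to specific GPUs only (note the extra quoting for device lists):
docker run --rm --gpus '"device=0,1"' nvidia/cuda:11.0-runtime nvidia-smi

# Limit the container to any two GPUs:
docker run --rm --gpus 2 nvidia/cuda:11.0-runtime nvidia-smi
```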
What is the output of running docker run --gpus all nvidia/cuda:11.0-runtime nvidia-smi on a system with one NVIDIA GPU?
Consider what nvidia-smi shows when GPU is accessible.
When the container has GPU access, nvidia-smi prints a table for each visible GPU: driver version, CUDA version, device name, memory usage, utilization, and running processes. If the GPU is not accessible, the command fails with an error or reports that no devices were found.
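For reference, a successful run looks roughly like this (abbreviated and illustrative; driver versions and GPU model depend entirely on the host):

```shell
docker run --rm --gpus all nvidia/cuda:11.0-runtime nvidia-smi
# Illustrative, abbreviated output for a single-GPU host:
# +-----------------------------------------------------------------------------+
# | NVIDIA-SMI 450.xx     Driver Version: 450.xx     CUDA Version: 11.0         |
# |-------------------------------+----------------------+----------------------+
# | GPU  Name                     | Memory-Usage         | GPU-Util             |
# |   0  Tesla T4                 |   0MiB / 15109MiB    | 0%                   |
# +-----------------------------------------------------------------------------+
```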
Which configuration file must be modified to enable the NVIDIA Container Toolkit to automatically provide GPU support for Docker containers?
Think about Docker daemon configuration files.
The NVIDIA Container Toolkit requires registering the nvidia runtime in /etc/docker/daemon.json so Docker can provide GPU support automatically. The other files are unrelated or do not exist.
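A typical daemon.json entry looks like the sketch below (the runtime binary path may differ by distribution; recent toolkit versions can write this for you with `sudo nvidia-ctk runtime configure --runtime=docker`):

```json
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
```

Restart the Docker daemon after editing the file (for example, `sudo systemctl restart docker`) for the change to take effect.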
You run docker run --gpus all nvidia/cuda:11.0-base nvidia-smi but get the error nvidia-smi: command not found. What is the most likely cause?
Consider what is inside the container image.
The error comes from inside the container: the image does not contain the nvidia-smi binary. The nvidia/cuda:11.0-base image is minimal and may not include it. A missing host GPU driver or a stopped Docker daemon would produce different errors, not "command not found".
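A sketch of how to diagnose and fix this, assuming the image tags above (these commands need a GPU host to run):

```shell
# Check whether the minimal image ships the binary at all:
docker run --rm nvidia/cuda:11.0-base which nvidia-smi || echo "not in image"

# Switch to a fuller image tier (-runtime or -devel) that includes the tooling:
docker run --rm --gpus all nvidia/cuda:11.0-runtime nvidia-smi
```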
What is the correct order of steps to enable GPU support for a Kubernetes pod?
Think about preparing nodes first, then enabling Kubernetes to manage GPUs, then using them in pods.
First, install NVIDIA drivers on the GPU nodes. Then deploy the NVIDIA device plugin so Kubernetes can discover and schedule GPUs. Next, create pods that request nvidia.com/gpu resources. Finally, verify GPU access inside the pods.
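The last two steps can be sketched with a minimal pod spec (the pod name and image tag are illustrative; the device plugin DaemonSet from NVIDIA's k8s-device-plugin project must already be deployed):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvidia/cuda:11.0-runtime
    command: ["nvidia-smi"]          # verify GPU access inside the pod
    resources:
      limits:
        nvidia.com/gpu: 1            # request one GPU from the scheduler
```

After the pod completes, `kubectl logs gpu-test` should show the nvidia-smi table if GPU access is working.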