Docker Mastery for ML
Challenge: 5 Problems
Get all challenges correct to earn this badge!
Test your skills under time pressure!
❓ Predict Output
Difficulty: intermediate · Time limit: 2:00
Output of Dockerfile RUN command
What will be the output when building this Dockerfile snippet?
Tags: ML, Python

```dockerfile
FROM python:3.8-slim
RUN echo "Hello from Docker" > /message.txt
RUN cat /message.txt
```
💡 Hint
RUN commands execute during build and output their results in the build logs.
📝 Explanation
The RUN instruction executes shell commands during the image build. The first RUN writes the message to /message.txt; the second RUN reads the file and prints its contents to the build log. (With BuildKit, pass --progress=plain to docker build to see per-step output.)
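Each RUN line is executed by /bin/sh inside an intermediate container. As a sketch, the two RUN instructions are equivalent to the shell sequence below; the /tmp path is an adaptation so the snippet runs unprivileged outside a container (the Dockerfile writes to /message.txt as root):

```shell
# Equivalent of the two RUN steps, run directly in a shell:
echo "Hello from Docker" > /tmp/message.txt   # first RUN: write the file
cat /tmp/message.txt                          # second RUN: print its contents
```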
❓ Model Choice
Difficulty: intermediate · Time limit: 2:00
Choosing the best Docker base image for ML model deployment
You want to deploy a TensorFlow model with GPU support in a Docker container. Which base image is the best choice?
💡 Hint
Look for images that include TensorFlow and GPU support.
📝 Explanation
The tensorflow/tensorflow:latest-gpu image includes TensorFlow with GPU support pre-installed, making it ideal for deploying GPU models.
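A minimal Dockerfile sketch built on that base image; serve.py and the model/ directory are placeholder names for an inference entrypoint and exported model artifacts, not part of the original question:

```dockerfile
# Hedged sketch: GPU-enabled TensorFlow base image for model serving.
FROM tensorflow/tensorflow:latest-gpu
WORKDIR /app
COPY model/ ./model/    # placeholder: exported model artifacts
COPY serve.py .         # placeholder: inference entrypoint
CMD ["python", "serve.py"]
```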
❓ Hyperparameter
Difficulty: advanced · Time limit: 2:00
Optimizing Docker container for faster ML training
Which Dockerfile practice helps reduce image size and speed up ML training container startup?
💡 Hint
Fewer layers in Docker images reduce size and improve startup.
📝 Explanation
Combining RUN commands with && creates fewer layers, reducing image size and improving container startup time.
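As a sketch, the layer-reduction pattern looks like this; the package names are illustrative, and the key point is that cleanup in the same RUN keeps the apt cache out of every layer:

```dockerfile
FROM python:3.8-slim
# One combined RUN creates a single layer instead of four; removing the
# apt lists in the same layer means they never land in the image at all.
RUN apt-get update && \
    apt-get install -y --no-install-recommends build-essential && \
    pip install --no-cache-dir numpy pandas && \
    rm -rf /var/lib/apt/lists/*
```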
🔧 Debug
Difficulty: advanced · Time limit: 2:00
Debugging Docker container failing to access GPU
You built a Docker container for ML training with GPU support, but inside the container, the GPU is not detected. What is the most likely cause?
💡 Hint
GPU access requires special runtime flags when running containers.
📝 Explanation
To access GPUs inside a container, you must run it with the --gpus flag (which in turn requires the NVIDIA Container Toolkit on the host); otherwise, the GPU devices are never mapped into the container and the hardware is invisible to it.
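A hedged example of the fix, assuming the host has the NVIDIA Container Toolkit installed; ml-train:latest is a placeholder image tag:

```shell
# Request all host GPUs inside the container; without --gpus, the NVIDIA
# devices are not mapped in and frameworks silently fall back to CPU.
docker run --rm --gpus all ml-train:latest nvidia-smi
```

Running nvidia-smi as the container command is a quick smoke test: if it lists the GPUs, the container can see the hardware.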
❓ Metrics
Difficulty: expert · Time limit: 2:00
Measuring Docker container startup time impact on ML inference latency
You want to measure how Docker container startup time affects ML model inference latency in production. Which approach is best?
💡 Hint
Inference latency should reflect only the model prediction time during normal operation.
📝 Explanation
Inference latency metrics typically exclude container startup time because startup happens once, not per request.
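A minimal Python sketch of the distinction: time only the per-request prediction, and treat startup as a separate one-time deployment metric. Here predict is a placeholder for a real model call:

```python
import time

def predict(x):
    """Stand-in for a model forward pass (placeholder, not a real model)."""
    return x * 2

# Per-request metric: wrap only the prediction itself.
start = time.perf_counter()
result = predict(21)
inference_latency = time.perf_counter() - start

# Container startup time would be measured once, outside this loop
# (e.g. time from `docker run` to the service's readiness probe passing).
print(f"inference latency: {inference_latency * 1e6:.1f} µs")
```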