Challenge - 5 Problems
Compute Resource Mastery
Get all challenges correct to earn this badge!
Test your skills under time pressure!
💻 Command Output
Intermediate · 2:00
Output of Kubernetes resource request command
What is the output of the following command when run on a pod named ml-training-pod in Kubernetes?

kubectl get pod ml-training-pod -o jsonpath='{.spec.containers[0].resources.requests}'
💡 Hint
Resource requests define the minimum compute resources a container needs.
📝 Explanation
The command extracts the resource requests of the first container in the pod. The correct answer shows a typical request of 500 millicores of CPU and 1 GiB of memory.
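For reference, a container spec carrying those requests might look like the fragment below (container name and values are illustrative, not from the original question):

```yaml
# Hypothetical spec fragment for ml-training-pod
spec:
  containers:
  - name: trainer            # hypothetical container name
    resources:
      requests:
        cpu: "500m"          # 500 millicores
        memory: "1Gi"        # 1 GiB
```

Against such a spec, the jsonpath query would print the requests map as JSON, something like {"cpu":"500m","memory":"1Gi"}.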
🧠 Conceptual
Intermediate · 2:00
Understanding GPU resource allocation in MLOps
In an MLOps pipeline, why is it important to specify GPU resource limits for training jobs?
💡 Hint
Think about resource sharing in a multi-tenant environment.
📝 Explanation
Specifying GPU limits ensures that a training job does not consume more GPU resources than allocated, which helps maintain fairness and stability when multiple jobs run simultaneously.
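As a sketch, a training pod that caps its GPU usage might declare the following (assumes the NVIDIA device plugin is installed; pod, container, and image names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-training-job        # hypothetical name
spec:
  containers:
  - name: trainer               # hypothetical container name
    image: registry.example.com/trainer:latest   # hypothetical image
    resources:
      limits:
        nvidia.com/gpu: 1       # hard cap: exactly one GPU is reserved
```

Note that GPUs are requested via limits only; for extended resources like nvidia.com/gpu, Kubernetes sets the request equal to the limit, so the scheduler reserves a whole device per unit.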
🔀 Workflow
Advanced · 3:00
Order the steps to configure autoscaling for compute resources in a Kubernetes cluster
Arrange the following steps in the correct order to enable autoscaling of pods based on CPU usage in Kubernetes.
💡 Hint
Metrics collection must be in place before the autoscaler can act on metrics.
📝 Explanation
First, the metrics server must be deployed to provide CPU usage data. Then resource requests must be set on the workload, since CPU utilization is measured as a percentage of the request. Next, create the HorizontalPodAutoscaler (HPA) to define the scaling rules. Finally, verify the scaling behavior.
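The final step of that ordering, creating the HPA, can be sketched with an autoscaling/v2 manifest like the one below (the Deployment name and thresholds are hypothetical; the target Deployment must already declare CPU requests):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: trainer-hpa            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ml-inference         # hypothetical Deployment with CPU requests set
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average CPU exceeds 70% of requests
```

The verification step then amounts to watching kubectl get hpa and confirming the replica count follows load.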
❓ Troubleshoot
Advanced · 2:00
Identify the cause of pod scheduling failure due to resource constraints
A pod in Kubernetes fails to schedule with the message:

0/5 nodes are available: 5 Insufficient cpu.

What is the most likely cause?
💡 Hint
Focus on the error message about CPU availability.
📝 Explanation
The error indicates no node has enough free CPU to meet the pod's CPU request, causing scheduling failure.
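One way to confirm is to compare the pod's CPU request against each node's allocatable CPU with kubectl describe nodes. A common remedy is lowering the request so it fits the free capacity of at least one node (the values here are illustrative):

```yaml
# Hypothetical fix: reduce the CPU request to fit node capacity
resources:
  requests:
    cpu: "500m"    # reduced from a value no node could satisfy
```

Alternatively, the cluster can be given more capacity (larger nodes or more of them) if the original request genuinely reflects the workload's needs.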
✅ Best Practice
Expert · 2:30
Best practice for managing compute resources in multi-tenant MLOps environments
Which practice best ensures fair and efficient compute resource usage among multiple teams running ML workloads on shared infrastructure?
💡 Hint
Think about automated controls to prevent resource hogging.
📝 Explanation
Resource quotas and limit ranges enforce boundaries on resource usage per team, preventing any single team from exhausting cluster resources and ensuring fair sharing.
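A minimal sketch of both mechanisms for one team's namespace might look like this (namespace, names, and quantities are hypothetical):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota           # hypothetical quota name
  namespace: team-a            # hypothetical team namespace
spec:
  hard:
    requests.cpu: "20"         # team may request at most 20 cores in total
    requests.memory: 64Gi
    limits.cpu: "40"
    limits.memory: 128Gi
    requests.nvidia.com/gpu: "4"   # cap GPU consumption per team
---
apiVersion: v1
kind: LimitRange
metadata:
  name: team-a-defaults        # hypothetical LimitRange name
  namespace: team-a
spec:
  limits:
  - type: Container
    defaultRequest:            # applied when a container omits requests
      cpu: "250m"
      memory: 256Mi
    default:                   # applied when a container omits limits
      cpu: "500m"
      memory: 512Mi
```

The ResourceQuota bounds a team's aggregate usage, while the LimitRange fills in per-container defaults so that every pod counts against the quota even when authors forget to set requests.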