
Kubernetes for ML workloads in MLOps - Step-by-Step Execution

Process Flow - Kubernetes for ML workloads
1. Prepare ML Model Container
2. Create Kubernetes Deployment
3. Kubernetes Scheduler Assigns Pod
4. Pod Runs ML Container
5. Model Serves Predictions
6. Monitor & Scale Pods Based on Load
7. Update Model or Config
This flow shows how an ML model container is deployed on Kubernetes, scheduled onto nodes as pods, serves predictions, and scales with demand.
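The "Model Serves Predictions" step assumes the container runs an HTTP server on the Deployment's containerPort. A minimal stdlib-only sketch of such a server (the /predict path, the payload shape, and the sum-based "model" are illustrative stand-ins for a real inference server such as Flask or FastAPI):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

def predict(features):
    # Placeholder "model": real code would load weights and run inference.
    return {"prediction": sum(features)}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body and run the (placeholder) model.
        length = int(self.headers.get("Content-Length", 0))
        features = json.loads(self.rfile.read(length))["features"]
        body = json.dumps(predict(features)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the sketch quiet

# The real container would bind 0.0.0.0:5000 (the Deployment's containerPort);
# an ephemeral port here lets the sketch run anywhere.
server = HTTPServer(("127.0.0.1", 0), PredictHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Round-trip one prediction request, as a Service-routed client would.
req = Request(f"http://127.0.0.1:{port}/predict",
              data=json.dumps({"features": [1, 2, 3]}).encode(),
              headers={"Content-Type": "application/json"})
resp = urlopen(req).read().decode()
print(resp)  # {"prediction": 6}
server.shutdown()
```

Packaging this script into the `mlmodel` image (with the server bound to port 5000) is what makes the Deployment below meaningful.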
Execution Sample
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ml-model
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ml-model
  template:
    metadata:
      labels:
        app: ml-model
    spec:
      containers:
      - name: model
        image: mlmodel:latest
        ports:
        - containerPort: 5000
This YAML deploys 2 replicas of an ML model container on Kubernetes, exposing port 5000 for predictions.
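The Deployment alone gives each pod its own ephemeral IP; the "Service routes traffic" step in the process table assumes a Service sits in front of the pods. A minimal sketch (the name ml-model-svc and the client-facing port 80 are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ml-model-svc
spec:
  selector:
    app: ml-model        # matches the pod labels in the Deployment above
  ports:
  - port: 80             # port clients call the Service on
    targetPort: 5000     # containerPort of the model pods
```

With this in place, prediction requests to the Service are load balanced across whichever replicas are currently running.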
Process Table
Step | Action | Kubernetes Component | Result
1 | Apply Deployment YAML | kubectl | Deployment 'ml-model' created with 2 replicas
2 | Scheduler assigns pods to nodes | Kubernetes Scheduler | 2 pods scheduled on available nodes
3 | Pods start containers | Kubelet | ML model containers running and listening on port 5000
4 | Service routes traffic | Kubernetes Service | Requests to model are load balanced across pods
5 | Monitor load | Horizontal Pod Autoscaler | Pods scaled up/down based on CPU usage
6 | Update Deployment with new image | kubectl | Rolling update triggers new pods with updated model
7 | Old pods terminated | Kubernetes Controller | Deployment updated successfully
8 | End | - | Model serving stable with desired replicas
💡 Deployment reaches desired state with pods running and serving predictions
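Step 5's automatic scaling assumes a HorizontalPodAutoscaler resource, which the execution sample does not show. A sketch that would reproduce the 2 → 3 scaling seen in the status tracker (the name, maxReplicas, and 70% CPU threshold are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ml-model-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ml-model     # the Deployment from the execution sample
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale up when average CPU exceeds 70%
```

Note that CPU-based autoscaling requires the metrics server to be installed in the cluster and CPU resource requests set on the container.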
Status Tracker
Variable | Start | After Step 1 | After Step 3 | After Step 5 | Final
Deployment replicas | 0 | 2 | 2 | 3 (scaled up) | 3
Pods running | 0 | 0 | 2 | 3 | 3
Model version | none | v1 (mlmodel:latest) | v1 | v1 | v2 (after update)
Key Moments - 3 Insights
Why do we see pods starting only after the scheduler assigns them?
Because the scheduler first decides which node each pod should run on (Step 2); only then does the kubelet on that node pull the image and start the container (Step 3). This ensures pods land on nodes with enough resources to run them.
How does scaling happen automatically when load increases?
The Horizontal Pod Autoscaler monitors CPU usage (Step 5) and increases pod replicas when usage is high, as shown by the increase from 2 to 3 pods in the variable tracker.
What happens during a rolling update of the ML model?
When a new image is applied (Step 6), Kubernetes creates new pods with the updated model and gradually terminates old pods (Step 7) to avoid downtime.
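The rolling update described above is typically driven with kubectl. A sketch of the commands (the v2 tag is illustrative; in practice, prefer pinned tags over mlmodel:latest so Kubernetes can detect that the image actually changed):

```shell
# Point the Deployment at the new model image; this triggers a rolling update
kubectl set image deployment/ml-model model=mlmodel:v2

# Watch new pods come up while old ones are terminated
kubectl rollout status deployment/ml-model

# Roll back if the new model misbehaves
kubectl rollout undo deployment/ml-model
```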
Visual Quiz - 3 Questions
Test your understanding
Looking at the execution table, at which step do pods start running the ML containers?
A. Step 2
B. Step 5
C. Step 3
D. Step 1
💡 Hint
Check the 'Pods start containers' action in the execution table at Step 3
According to the variable tracker, how many pods are running after scaling?
A. 2
B. 3
C. 1
D. 0
💡 Hint
Look at the 'Pods running' row after Step 5 in the variable tracker
If the deployment YAML changes the image to a new version, what happens next according to the execution table?
A. Rolling update triggers new pods with updated model
B. Nothing changes until manual restart
C. Pods are immediately deleted
D. Scheduler assigns pods to nodes
💡 Hint
See Step 6 in the execution table about updating deployment with new image
Concept Snapshot
Kubernetes for ML workloads:
- Package ML model as container image
- Create Deployment YAML with replicas
- Apply YAML to create pods running model
- Use Service to route prediction requests
- Autoscale pods based on load
- Update Deployment for new model versions
- Rolling updates avoid downtime
Full Transcript
This visual execution shows how Kubernetes manages ML workloads by deploying containerized models as pods. First, the ML model is packaged into a container image. Then a Deployment YAML specifies how many replicas to run. Applying this YAML creates a Deployment resource. The Kubernetes scheduler assigns pods to nodes, and kubelets start the containers. A Service load balances prediction requests to pods. The Horizontal Pod Autoscaler monitors load and scales pods up or down automatically. When a new model version is available, updating the Deployment triggers a rolling update, replacing old pods with new ones without downtime. Variables like pod count and model version change step-by-step, helping beginners understand the process clearly.