Kubernetes · DevOps · ~10 mins

Canary deployments in Kubernetes - Step-by-Step Execution

Process Flow - Canary deployments
1. Start: New version ready
2. Deploy a small subset of pods with the new version
3. Monitor performance and errors
4. If stable, increase new version pods; if errors, roll back and fix issues
5. Repeat until full rollout
6. Done
The flow shows deploying a small part of the app with the new version, monitoring it, then either increasing rollout or rolling back based on results.
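The loop above can be sketched as a small simulation. This is a sketch, not real cluster automation: `check_health` is a hypothetical stand-in for whatever log/metric check you run at each stage, and pod counts are plain integers.

```python
# Minimal simulation of the canary control loop: start small, monitor,
# then either grow the canary or roll back. `check_health` is a
# hypothetical callable standing in for real log/metric monitoring.
def canary_rollout(total_pods, check_health, start=1, step=2):
    """Gradually shift pods to the new version; roll back on failure.

    Returns the final number of pods on the new version:
    total_pods on success, 0 after a rollback.
    """
    new = start  # begin with a small canary subset
    while new < total_pods:
        if not check_health(new):
            return 0  # errors detected: roll back to the old version
        new = min(new + step, total_pods)  # increase new-version pods
    return new if check_health(new) else 0

# A healthy release reaches full rollout:
assert canary_rollout(10, lambda n: True) == 10
# Errors at any stage trigger a rollback to the old version:
assert canary_rollout(10, lambda n: n < 3) == 0
```

The health check is deliberately injectable: in practice it would query something like a metrics backend, but the rollout logic itself stays the same.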
Execution Sample
Kubernetes
kubectl apply -f deployment-v2.yaml                         # create/update the Deployment
kubectl set image deployment/myapp myapp=myapp:v2 --record  # switch to the new image (--record is deprecated in recent kubectl)
kubectl rollout status deployment/myapp                     # wait for the rollout to progress
kubectl get pods -l app=myapp                               # inspect the pods
# Monitor logs and metrics
kubectl rollout undo deployment/myapp                       # roll back if errors appear
This sequence deploys the new version, updates the Deployment's image for the canary, checks rollout status, monitors the pods, and rolls back if needed.
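In practice this command sequence is often scripted. Below is a hedged automation sketch: the manifest, Deployment name, and image are taken from the commands above, and the `runner` callable is injectable purely so the logic can be exercised without a cluster.

```python
import subprocess

# Sketch of scripting the kubectl sequence above. By default each command
# runs via subprocess.run; a custom `runner` can be injected for dry runs
# or tests (no cluster required).
def run_canary_commands(image, runner=None):
    if runner is None:
        runner = lambda cmd: subprocess.run(cmd, check=True)
    cmds = [
        ["kubectl", "apply", "-f", "deployment-v2.yaml"],       # deploy new version
        ["kubectl", "set", "image", "deployment/myapp", f"myapp={image}"],
        ["kubectl", "rollout", "status", "deployment/myapp"],   # wait for rollout
    ]
    for cmd in cmds:
        runner(cmd)
    return cmds
```

A dry run with a collecting runner (e.g. `run_canary_commands("myapp:v2", runner=log.append)`) shows exactly which commands would execute without touching a cluster.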
Process Table
Step | Action | Pods with old version | Pods with new version | Result | Next Step
1 | Deploy initial canary pods with new version | 9 | 1 | 1 pod runs new version, 9 old | Monitor performance
2 | Monitor logs and metrics | 9 | 1 | No errors detected | Increase new version pods
3 | Scale new version pods to 3 | 7 | 3 | More pods running new version | Monitor performance
4 | Monitor logs and metrics | 7 | 3 | Minor errors detected | Decide to rollback or continue
5 | Rollback to old version | 10 | 0 | All pods back to old version | Fix issues and retry later
6 | End | 10 | 0 | Deployment stable with old version | Stop
💡 Rollback triggered due to errors in canary pods, deployment reverted to old version
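The table's pod counts can be replayed as simple state transitions. This is a sketch: the counts are hard-coded to mirror the table above rather than read from a live cluster.

```python
# Replaying the process table as state transitions (a sketch; pod counts
# mirror the table rather than coming from a live cluster).
def apply_step(state, new_pods=None, rollback=False):
    total = state["old"] + state["new"]
    if rollback:
        # rollback: every pod returns to the old version
        return {"old": total, "new": 0}
    if new_pods is not None:
        # scale the canary: shift `new_pods` of the total to the new version
        return {"old": total - new_pods, "new": new_pods}
    return dict(state)  # pure monitoring step: counts unchanged

state = {"old": 10, "new": 0}             # start
state = apply_step(state, new_pods=1)     # step 1: deploy 1 canary pod
state = apply_step(state, new_pods=3)     # step 3: scale canary to 3
state = apply_step(state, rollback=True)  # step 5: errors -> rollback
assert state == {"old": 10, "new": 0}     # matches the table's final row
```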
Status Tracker
Variable | Start | After Step 1 | After Step 3 | After Step 5 | Final
Pods with old version | 10 | 9 | 7 | 10 | 10
Pods with new version | 0 | 1 | 3 | 0 | 0
Errors detected | No | No | Yes (minor) | N/A | N/A
Key Moments - 3 Insights
Why do we start with only a few pods running the new version instead of all at once?
Starting with a small number limits risk if the new version has bugs. Execution table step 1 shows only 1 pod updated to catch issues early.
What happens if errors are detected during monitoring?
If errors appear, the deployment can be rolled back to the old version to keep the app stable, as shown in step 5 of the execution table.
Why do we monitor performance after increasing new version pods?
Increasing new pods gradually helps confirm stability at larger scale before full rollout, shown in steps 3 and 4 where monitoring guides next action.
Visual Quiz - 3 Questions
Test your understanding
Looking at the execution table, how many pods run the new version after step 3?
A. 3
B. 1
C. 7
D. 10
💡 Hint
Check the 'Pods with new version' column at step 3 in the execution table.
At which step does the deployment roll back to the old version?
A. Step 4
B. Step 5
C. Step 2
D. Step 6
💡 Hint
Look for the row mentioning rollback in the 'Action' column of the execution table.
If no errors were detected at step 4, what would likely happen next?
A. Rollback to old version
B. Stop deployment
C. Increase new version pods further
D. Delete all pods
💡 Hint
Refer to the 'Result' and 'Next Step' columns at step 4 in the execution table.
Concept Snapshot
Canary deployments gradually roll out new app versions.
Start with a small subset of pods running new code.
Monitor performance and errors carefully.
If stable, increase new pods until full rollout.
If errors occur, roll back to the old version.
This reduces risk of bad releases.
Full Transcript
Canary deployments in Kubernetes involve deploying a new version of an application to a small number of pods first. This limits risk by exposing only a small part of the system to potential bugs. After deploying the initial canary pods, you monitor logs and metrics to check for errors or performance issues. If everything looks good, you increase the number of pods running the new version step by step, monitoring at each stage. If errors are detected, you roll back the deployment to the old stable version to keep the application reliable. This process repeats until the new version is fully rolled out or fixed. The execution table shows each step with pod counts and decisions, helping visualize how canary deployments work in practice.