Kubernetes uses a reconciliation loop to manage resources. What does this loop do?
Think about how Kubernetes keeps your apps matching the state you declared.
The reconciliation loop ensures the cluster matches the desired state by detecting differences and fixing them automatically.
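The idea can be sketched in a few lines of Python. This is a toy model, not real controller code; the names `reconcile`, `desired`, and `actual` are invented for this example:

```python
# Toy model of a reconciliation loop (illustrative only; not real
# Kubernetes controller code).

def reconcile(desired: dict, actual: dict) -> dict:
    """Return a new actual state that matches the desired state."""
    new_actual = dict(actual)
    # Create or update anything that is missing or different.
    for name, spec in desired.items():
        if new_actual.get(name) != spec:
            new_actual[name] = spec  # "create/patch" the resource
    # Remove anything that should no longer exist.
    for name in list(new_actual):
        if name not in desired:
            del new_actual[name]
    return new_actual

desired = {"web": {"replicas": 3}}
actual = {"web": {"replicas": 1}, "stale": {"replicas": 2}}
print(reconcile(desired, actual))  # {'web': {'replicas': 3}}
```

A real controller runs this comparison continuously, so drift is corrected without any manual intervention.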
You manually delete a pod that is part of a Deployment. What will 'kubectl get pods' show shortly after?
kubectl delete pod <pod-name>
kubectl get pods
Think about how Deployments maintain the number of pods.
Deployments (through their ReplicaSet) maintain the desired number of pods. When one is deleted, the controller creates a replacement, so 'kubectl get pods' will show the full count again, with a new pod name in place of the deleted one.
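The replacement behavior can be modeled in a short Python sketch. The class name `ReplicaKeeper` and the pod-naming scheme are invented for this illustration, not Kubernetes API objects:

```python
import itertools

# Toy model of a ReplicaSet-style controller (illustrative only).
class ReplicaKeeper:
    def __init__(self, desired: int):
        self.desired = desired
        self._ids = itertools.count(1)
        self.pods = [f"pod-{next(self._ids)}" for _ in range(desired)]

    def delete(self, pod: str):
        self.pods.remove(pod)

    def reconcile(self):
        # Create replacements until the actual count matches desired.
        while len(self.pods) < self.desired:
            self.pods.append(f"pod-{next(self._ids)}")

rs = ReplicaKeeper(desired=3)
rs.delete("pod-2")   # analogue of a manual `kubectl delete pod`
rs.reconcile()       # the controller notices and replaces it
print(rs.pods)       # three pods again, including a new "pod-4"
```

Note that the replacement gets a fresh name, which is exactly what you see in 'kubectl get pods' after deleting a Deployment-managed pod.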
Choose the YAML snippet that correctly sets a Deployment to have 3 replicas of an nginx container.
Check apiVersion, kind, and selector format carefully.
Option A uses the correct apiVersion (apps/v1) and kind (Deployment), sets replicas as an integer, and uses the selector.matchLabels format that Deployments require, with labels matching the pod template.
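For reference, a manifest meeting the question's requirements (3 replicas of an nginx container) looks like the following; the resource name, label values, and image tag are illustrative choices:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3            # integer, not a quoted string
  selector:
    matchLabels:
      app: nginx         # must match the pod template labels below
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
```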
You set replicas: 5 in your Deployment, but 'kubectl get pods' shows only 3 pods running and ready. What could cause this?
Check pod status and events for crash loops.
If some pods crash on startup or fail to pull their image, Kubernetes keeps restarting them (CrashLoopBackOff or ImagePullBackOff status), but they never pass their readiness checks, so fewer than the desired number of pods are running and ready.
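To confirm the cause, inspect the failing pods and their events with standard kubectl commands; `<pod-name>` is a placeholder:

```shell
# Look for statuses like CrashLoopBackOff or ImagePullBackOff
kubectl get pods

# The Events section at the bottom usually names the exact failure
kubectl describe pod <pod-name>

# Logs from the current and the previously crashed container
kubectl logs <pod-name>
kubectl logs <pod-name> --previous
```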
Put these steps in the order Kubernetes performs them when you update a Deployment manifest to change replicas from 2 to 4.
Think about how the update flows from manifest to running pods.
The API server first receives and stores the change; the controller then compares desired and actual state, creates the two missing pods, and finally those pods are scheduled and run on nodes.
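You can watch this flow happen live. Assuming the updated manifest is saved as `deployment.yaml` and the Deployment is named `nginx-deployment` (both placeholders):

```shell
kubectl apply -f deployment.yaml     # API server stores the new desired state
kubectl get pods --watch             # two new pods appear, then become Running
kubectl rollout status deployment/nginx-deployment   # returns once all replicas are available
```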