
Medium · Troubleshoot · Question 7 of 15
Kubernetes - ConfigMaps
After updating a ConfigMap with kubectl apply -f configmap.yaml, pods still use old values. What is the recommended next step to propagate changes without deleting pods manually?
A. Run <code>kubectl delete pods --all</code> to force pod recreation
B. Trigger a rolling restart of the deployment using <code>kubectl rollout restart</code>
C. Edit the pod spec to update the ConfigMap reference manually
D. Wait for the kubelet to restart pods automatically
Step-by-Step Solution
Solution:
  1. Step 1: Understand how ConfigMap updates propagate

    ConfigMap values consumed as environment variables are read only at container start and are never reloaded. Volume-mounted ConfigMaps are eventually refreshed by the kubelet (except <code>subPath</code> mounts), but most applications read the files only once at startup, so running pods keep using the old values either way.
  2. Step 2: Use a rolling restart

    Running <code>kubectl rollout restart</code> on the deployment gracefully replaces its pods; the new pods read the updated ConfigMap data at startup, with no manual pod deletion and no downtime.
  3. Final Answer:

    Trigger a rolling restart of the deployment using kubectl rollout restart -> Option B
  4. Quick Check:

    Rolling restart updates pods without downtime [OK]
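The steps above boil down to a short command sequence; the deployment name <code>my-app</code> is an assumption for illustration, and these commands require access to a running cluster:

```
# Apply the updated ConfigMap (the API server object changes,
# but running pods keep their old values)
kubectl apply -f configmap.yaml

# Trigger a rolling restart so replacement pods start with the new data
kubectl rollout restart deployment/my-app

# Watch the rollout complete; old pods are drained as new ones become ready
kubectl rollout status deployment/my-app
```

Under the hood, <code>kubectl rollout restart</code> updates an annotation on the pod template, which makes the Deployment controller perform a standard rolling update.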
Quick Trick: Use rollout restart to refresh pods after ConfigMap update [OK]
Common Mistakes:
  • Deleting pods manually causing downtime
  • Expecting automatic pod restart by kubelet
  • Editing pod spec directly instead of deployment
