Updating ConfigMaps and propagation in Kubernetes - Time & Space Complexity
When we update a ConfigMap in Kubernetes, we want to know how long it takes for the change to reach every pod that uses it.
We ask: How does propagating a ConfigMap change scale as the number of pods grows?
Analyze the time complexity of the following Kubernetes commands.
```shell
kubectl create configmap app-config --from-file=config.yaml
kubectl apply -f deployment.yaml
kubectl rollout restart deployment/my-app
```
This sequence creates a ConfigMap, applies a deployment that uses it, and restarts pods to pick up changes.
Look for repeated actions that affect time.
- Primary operation: Restarting each pod in the deployment to load the new ConfigMap.
- How many times: Once per pod in the deployment, sequentially or in batches.
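The per-pod restart loop can be sketched as a simple model (this is illustrative Python, not real Kubernetes API calls; the pod names and `rollout_restart` function are hypothetical):

```python
def rollout_restart(pods):
    """Model a rollout: each pod is restarted exactly once.

    Returns the number of restart operations performed, which
    grows linearly with the number of pods -- O(n).
    """
    restarts = 0
    for pod in pods:
        # In a real cluster, the controller would terminate this pod
        # and wait for its replacement to become Ready before moving on.
        restarts += 1
    return restarts

pods = [f"my-app-{i}" for i in range(100)]
print(rollout_restart(pods))  # 100 restart operations for 100 pods
```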
As the number of pods grows, the time to restart all pods grows too.
| Input Size (pods) | Approx. Operations (pod restarts) |
|---|---|
| 10 | 10 restarts |
| 100 | 100 restarts |
| 1000 | 1000 restarts |
Pattern observation: The time grows directly with the number of pods to restart.
Time Complexity: O(n)
This means the time to update and propagate ConfigMap changes grows linearly with the number of pods.
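Batching restarts (as a Deployment's rolling-update strategy does with `maxUnavailable`) speeds things up by a constant factor but does not change the asymptotic class. A rough model, with assumed batch sizes and restart times for illustration:

```python
import math

def rollout_time(n_pods, max_unavailable=1, restart_seconds=10):
    """Estimate wall-clock time for a rolling update.

    Pods restart in batches of up to `max_unavailable`; each batch
    takes `restart_seconds`. Total time is ceil(n / batch) * t,
    which is still O(n) for any fixed batch size.
    """
    batches = math.ceil(n_pods / max_unavailable)
    return batches * restart_seconds

print(rollout_time(10))                       # 100 s, one pod at a time
print(rollout_time(100))                      # 1000 s: 10x pods -> 10x time
print(rollout_time(100, max_unavailable=5))   # 200 s: faster, but still linear
```

Doubling the pod count always doubles the estimate, whatever the batch size: the constant shrinks, the O(n) shape stays.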
[X] Wrong: "Updating a ConfigMap automatically updates all pods instantly without extra time."
[OK] Correct: Pods that consume the ConfigMap as environment variables must restart to see changes (volume-mounted ConfigMaps do refresh eventually via the kubelet's sync loop, but not instantly), so time depends on how many pods need updating.
Understanding how updates scale helps you design systems that handle changes smoothly as they grow.
What if we used a sidecar container to watch ConfigMap changes and reload pods without restarting? How would the time complexity change?
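One way to explore this question with a hypothetical model (not a real sidecar implementation): if every pod runs its own watcher and reloads configuration in place, the reloads happen concurrently, so wall-clock propagation no longer grows with the pod count, even though the total work across the cluster is still O(n):

```python
def propagation_time(n_pods, reload_seconds=2):
    """Model watch-and-reload propagation: every pod reloads concurrently.

    Wall-clock time is roughly one reload interval regardless of n
    (O(1) wall-clock), while total reload work remains O(n) across pods.
    """
    return reload_seconds  # independent of n_pods

print(propagation_time(10))    # 2
print(propagation_time(1000))  # 2 -- same wall-clock time at any scale
```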