Rolling update strategy in Kubernetes - Time & Space Complexity
When Kubernetes updates an application, it replaces old versions with new ones gradually. Understanding how long this process takes helps us plan and manage updates smoothly.
We want to know how the update time grows as the number of pods increases.
Analyze the time complexity of the following Kubernetes rolling update configuration.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 10
  selector:            # required in apps/v1; must match the template labels
    matchLabels:
      app: example-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2
      maxSurge: 3
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app-container
          image: example-image:v2
```
This config updates 10 pods gradually, allowing up to 3 extra pods above the desired count (maxSurge) and up to 2 pods unavailable (maxUnavailable) at any moment.
Identify the operations that repeat — the rolling update's equivalent of loops or array traversals in code.
- Primary operation: Updating pods in batches during the rolling update.
- How many times: The number of batches depends on total pods and maxUnavailable/maxSurge settings.
As the number of pods (n) grows, the update happens in groups limited by maxUnavailable and maxSurge.
| Number of Pods (n) | Approx. Update Batches (≈ ⌈n / maxSurge⌉, with maxSurge = 3) |
|---|---|
| 10 | About 4 batches |
| 100 | About 34 batches |
| 1000 | About 334 batches |
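The batch counts in the table can be reproduced with a short sketch. This assumes a simplified model in which each batch brings up roughly maxSurge new pods (here 3); a real rollout also depends on maxUnavailable, readiness probes, and pod startup time, so treat this as an estimate, not a guarantee.

```python
import math

def estimated_batches(total_pods: int, max_surge: int) -> int:
    """Rough batch count for a rolling update: each batch
    creates up to max_surge new pods (simplified model)."""
    return math.ceil(total_pods / max_surge)

# Values matching the table above (maxSurge = 3)
for n in (10, 100, 1000):
    print(n, estimated_batches(n, max_surge=3))  # 4, 34, 334 batches
```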
Pattern observation: The number of batches grows roughly linearly with the number of pods.
Time Complexity: O(n)
This means the update time grows roughly in direct proportion to the number of pods being updated.
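To see the linear growth concretely, multiply the batch count by a fixed per-batch duration. The 30-second figure below is a hypothetical placeholder for pod startup plus readiness, not a value Kubernetes defines — the point is only that ten times the pods takes roughly ten times as long.

```python
import math

SECONDS_PER_BATCH = 30  # hypothetical per-batch duration (startup + readiness)

def estimated_update_seconds(total_pods: int, max_surge: int = 3) -> int:
    """Estimated rollout time under the simplified batch model:
    time grows linearly with the number of pods -> O(n)."""
    batches = math.ceil(total_pods / max_surge)
    return batches * SECONDS_PER_BATCH

print(estimated_update_seconds(100))   # 34 batches * 30 s = 1020 s
print(estimated_update_seconds(1000))  # 334 batches * 30 s = 10020 s
```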
[X] Wrong: "The rolling update time stays the same no matter how many pods there are."
[OK] Correct: Because pods update in batches limited by maxUnavailable and maxSurge, more pods mean more batches and more time.
Understanding how rolling updates scale helps you manage real applications smoothly and shows you can think about system behavior as it grows.
"What if maxUnavailable was set to 1 instead of 2? How would the time complexity change?"