Rolling updates in Docker - Time & Space Complexity
When updating applications with Docker, rolling updates help avoid downtime by updating containers gradually.
We want to understand how the update time grows as the number of containers increases.
Analyze the time complexity of this rolling update command.
```shell
docker service update \
  --image myapp:v2 \
  --update-parallelism 2 \
  --update-delay 10s \
  myapp_service
```
This command updates the service containers two at a time, waiting 10 seconds between batches.
Look at what repeats during the update process.
- Primary operation: Updating a batch of containers in parallel.
- How many times: Number of batches = ⌈n / parallelism⌉, i.e., total containers divided by batch size, rounded up.
As the number of containers grows, the update time grows in steps based on batch size and delay.
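The batch arithmetic above can be sketched as a small model. This is a hypothetical helper, not part of Docker; it assumes the rollout cost is dominated by per-batch work time plus the fixed `--update-delay` between batches:

```python
import math

def rollout_stats(containers: int, parallelism: int,
                  delay_s: int, per_batch_s: int = 0) -> tuple[int, int]:
    """Estimate batch count and minimum total time for a rolling update.

    Models `docker service update --update-parallelism P --update-delay D`:
    batches run one after another, so batch count grows as ceil(n / P).
    """
    batches = math.ceil(containers / parallelism)
    # The delay applies between batches, so there are (batches - 1) waits.
    total_s = batches * per_batch_s + max(batches - 1, 0) * delay_s
    return batches, total_s
```

For example, `rollout_stats(10, 2, 10)` models the command above on a 10-container service: 5 batches, with 4 ten-second waits between them.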
| Containers (n) | Batches (parallelism = 2) |
|---|---|
| 10 | 5 |
| 100 | 50 |
| 1,000 | 500 |
Pattern observation: The total update time grows roughly linearly with the number of containers.
Time Complexity: O(n)
More precisely, a rollout with parallelism p takes ⌈n / p⌉ batches; since p is a fixed constant, the total time grows in direct proportion to n. Doubling the number of containers roughly doubles the rollout time.
[X] Wrong: "Updating containers in parallel means update time stays the same no matter how many containers there are."
[OK] Correct: Even with parallel updates, batches happen one after another, so more containers mean more batches and more total time.
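To see why parallelism does not eliminate the growth, here is a minimal sketch of how a rolling update walks through containers batch by batch. This is illustrative only, not Docker's implementation:

```python
def simulate_rolling_update(n: int, parallelism: int):
    """Yield the batches a rolling update would process, in order.

    Containers within a batch update concurrently, but the batches
    themselves are strictly sequential, so more containers always
    means more batches and a longer total rollout.
    """
    containers = list(range(n))
    for start in range(0, n, parallelism):
        yield containers[start:start + parallelism]
```

Running `list(simulate_rolling_update(10, 2))` produces 5 sequential batches of 2 containers each; raising the parallelism shrinks the batch count by a constant factor but the count still scales linearly with n.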
Understanding how rolling updates scale helps you design smooth deployments that keep apps running without interruption.
What if we increased the update parallelism from 2 to 5? How would the time complexity change?