Why containers matter in Docker - Performance Analysis
We want to understand how the time it takes to manage containers changes as the number of containers grows. In other words: how does the cost of managing containers scale as their count increases?
Analyze the time complexity of the following Docker commands.
```bash
# Start, inspect, and tear down multiple containers from an image
for i in $(seq 1 5); do
    docker run -d --name "container_$i" nginx   # start a detached container
    docker logs "container_$i"                  # check its logs
    docker stop "container_$i"                  # stop it
    docker rm "container_$i"                    # remove it
done
```
This script starts 5 containers, checks their logs, stops them, and removes them one by one.
Look for repeated actions in the code.
- Primary operation: Loop running Docker commands for each container.
- How many times: 5 iterations, each issuing 4 Docker commands (run, logs, stop, rm).
As the number of containers (n) increases, each set of commands runs once per container, so the total work scales with n.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 sets of start, log, stop, remove |
| 100 | 100 sets of start, log, stop, remove |
| 1000 | 1000 sets of start, log, stop, remove |
Pattern observation: The total work grows directly with the number of containers.
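The table above can be sketched directly: each container accounts for one fixed set of 4 commands, so the total command count is 4 × n. A minimal sketch (the helper `count_ops` is hypothetical, not part of the original script):

```shell
# Each container needs 4 commands: run, logs, stop, rm.
count_ops() {
    echo $(( $1 * 4 ))
}

# Reproduce the table's input sizes
for n in 10 100 1000; do
    echo "n=$n containers -> $(count_ops "$n") Docker commands"
done
```

Doubling n doubles the command count, which is exactly what O(n) growth means in practice.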
Time Complexity: O(n)
This means total management time grows linearly: doubling the number of containers roughly doubles the time.
[X] Wrong: "Starting more containers takes the same time as starting one container."
[OK] Correct: Each container requires its own start, log, stop, and remove steps, so time adds up with more containers.
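This "time adds up" behavior can be simulated without Docker. In the sketch below, `sleep 1` stands in for one container's full start/log/stop/remove cycle (an assumption: each cycle takes roughly constant time); the helper names are hypothetical:

```shell
# Stand-in for one container's start + logs + stop + rm cycle
lifecycle() { sleep 1; }

# Measure how long n sequential lifecycles take, in whole seconds
time_for() {
    n=$1
    start=$(date +%s)
    for i in $(seq 1 "$n"); do
        lifecycle
    done
    echo $(( $(date +%s) - start ))
}

t1=$(time_for 1)
t3=$(time_for 3)
echo "n=1 -> ${t1}s, n=3 -> ${t3}s"
```

Tripling the number of containers roughly triples the elapsed time, matching the O(n) analysis.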
Understanding how container operations scale helps you explain system behavior clearly and shows you grasp practical workload management.
"What if we ran all containers in parallel instead of one after another? How would the time complexity change?"
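As a hint toward that question, here is a sketch comparing sequential and parallel launches, again using `sleep 1` as a stand-in for a container operation (an assumption; in a real script each `docker run ... &` would be backgrounded the same way):

```shell
# Stand-in for one container operation (e.g. docker run)
start_one() { sleep 1; }

# Sequential: wall-clock time grows with n -> O(n)
seq_start=$(date +%s)
for i in $(seq 1 3); do start_one; done
seq_time=$(( $(date +%s) - seq_start ))

# Parallel: background each job with &, then wait for all of them.
# Wall-clock time stays roughly constant, though total work is still O(n).
par_start=$(date +%s)
for i in $(seq 1 3); do start_one & done
wait
par_time=$(( $(date +%s) - par_start ))

echo "sequential=${seq_time}s parallel=${par_time}s"
```

Parallelism improves wall-clock time (here from about 3s to about 1s), but the machine still performs n operations, so resource usage remains O(n).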