# Why Monitoring Containers Matters in Docker: Performance Analysis
Monitoring containers helps us see how much work they do over time.
We want to know how the cost of monitoring grows as we watch more containers.
Analyze the time complexity of the following Docker monitoring script.
```bash
#!/bin/bash
# Collect the IDs of all currently running containers.
containers=$(docker ps -q)

# One docker stats call per container: n containers => n calls.
for container in $containers; do
  docker stats --no-stream --format "{{.Name}}: {{.CPUPerc}}" "$container"
  sleep 1   # pause one second between samples
done
```
This script lists all running containers and fetches each one's CPU usage once, pausing one second between calls.
Identify the loops, recursion, or array traversals that repeat work:
- Primary operation: Loop over each container to get CPU stats.
- How many times: Once per container running at the time.
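We can confirm the "once per container" pattern with a small simulation. This sketch replaces the real `docker ps -q` output with a hypothetical list of five fake IDs and counts how many `docker stats` calls the loop would issue:

```bash
#!/bin/bash
# Simulate the monitoring loop with stand-in container IDs
# (c1..c5 are hypothetical; real IDs come from `docker ps -q`).
containers="c1 c2 c3 c4 c5"

ops=0
for container in $containers; do
  ops=$((ops + 1))   # one docker stats call would happen here
done

echo "$ops"   # 5 containers -> 5 calls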
As the number of containers grows, the script runs more commands.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 docker stats calls |
| 100 | 100 docker stats calls |
| 1000 | 1000 docker stats calls |
Pattern observation: The work grows directly with the number of containers.
Time Complexity: O(n)
This means the monitoring time grows linearly as you add more containers.
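The linear relationship can be written as a simple cost model, T(n) = n × (cost per call). The per-call figure below (about 1100 ms: the 1-second sleep plus an assumed ~100 ms for `docker stats` itself) is an illustrative estimate, not a measured value:

```bash
#!/bin/bash
# Linear cost model: total time = n * per-call cost.
n=100              # number of running containers
per_call_ms=1100   # assumed: 1000 ms sleep + ~100 ms per stats call

total_ms=$((n * per_call_ms))
echo "${total_ms} ms"   # 110000 ms, i.e. ~110 s for 100 containers
```

A full pass over 100 containers would take nearly two minutes under these assumptions, which is why the `sleep 1` inside the loop dominates the runtime long before `docker stats` does.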
[X] Wrong: "Monitoring many containers takes the same time as monitoring one."
[OK] Correct: Each container adds extra work, so time grows with the number of containers.
Understanding how monitoring scales helps you design systems that stay fast as they grow.
"What if we monitored containers in parallel instead of one by one? How would the time complexity change?"