Container metrics collection in Docker - Time & Space Complexity
When collecting container metrics, we want to know how the time to gather data changes as the number of containers grows.
We ask: How does the work increase when more containers run?
Analyze the time complexity of the following code snippet.
```bash
#!/bin/bash
mkdir -p metrics                          # ensure the output directory exists
containers=$(docker ps -q)                # IDs of all running containers
for container in $containers; do
  docker stats --no-stream "$container" > "metrics/$container.txt"
done
```
This script lists all running containers, then collects metrics for each one individually, saving the output to a file per container. (Note that the commands run sequentially; a trailing `wait` would only matter if they were launched in the background with `&`.)
Identify the loops, recursion, or array traversals that repeat.
- Primary operation: Loop over each container to run a metrics command.
- How many times: Once per container running on the system.
As the number of containers increases, the script runs more commands, one per container.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 metrics commands |
| 100 | 100 metrics commands |
| 1000 | 1000 metrics commands |
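The growth shown in the table can be reproduced with a quick simulation. This sketch needs no Docker daemon: incrementing a counter stands in for one `docker stats` invocation.

```shell
#!/bin/bash
# Simulate the per-container loop for several container counts and
# count how many "metrics commands" each run would issue.
for n in 10 100 1000; do
  ops=0
  for i in $(seq 1 "$n"); do   # stands in for: for container in $containers
    ops=$((ops + 1))           # stands in for one `docker stats` call
  done
  echo "n=$n -> $ops metrics commands"
done
```

Each run issues exactly n commands, confirming the linear pattern in the table.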
Pattern observation: The work grows directly with the number of containers.
Time Complexity: O(n)
This means the time to collect metrics grows linearly: doubling the number of running containers roughly doubles the collection time.
[X] Wrong: "Collecting metrics for many containers takes the same time as for one container."
[OK] Correct: Each container adds extra work, so time increases with the number of containers.
Understanding how work grows with container count helps you design efficient monitoring tools and shows you can think about scaling in real systems.
"What if we collected metrics for all containers in one combined command instead of one by one? How would the time complexity change?"
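One way to explore this question: `docker stats --no-stream` accepts multiple container IDs, so a single call such as `docker stats --no-stream $(docker ps -q)` covers every running container at once. The daemon still gathers stats per container, so the total work remains O(n); what shrinks is the constant factor (one process launch and one API round trip instead of n). The sketch below uses a hypothetical `collect_stats` stub in place of the real `docker stats` command, so it runs without a Docker daemon and simply counts invocations.

```shell
#!/bin/bash
# Hypothetical stub standing in for `docker stats --no-stream`.
# It records how many times it is invoked.
calls=0
collect_stats() {            # stand-in for: docker stats --no-stream "$@"
  calls=$((calls + 1))
}

containers="c1 c2 c3 c4 c5"  # pretend IDs, as if from `docker ps -q`

# Per-container approach: one invocation per container -> n calls.
for c in $containers; do
  collect_stats "$c"
done
echo "per-container invocations: $calls"

# Combined approach: one invocation covering all containers -> 1 call.
calls=0
collect_stats $containers
echo "combined invocations: $calls"
```

Either way, n containers' worth of data must be gathered, so the complexity class stays O(n); the combined command only reduces per-invocation overhead.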