Container resource usage stats in Docker - Time & Space Complexity
We want to understand how the time to fetch resource stats changes as the number of running containers grows: how does monitoring many containers affect the time it takes to collect their usage data?
Analyze the time complexity of the following Docker command sequence.
```shell
docker stats --no-stream --format "{{.Container}} {{.CPUPerc}} {{.MemUsage}}"
```
This command prints a single snapshot of current CPU and memory usage for every running container; the `--no-stream` flag makes it exit after one refresh instead of updating continuously.
Look for repeated actions in the command's process.
- Primary operation: Gathering stats for each running container.
- How many times: Once per container running on the system.
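The per-container loop above can be sketched in Python. This is an illustrative model, not the Docker implementation: `collect_stats` is a hypothetical stand-in for one stats query against one container.

```python
# Sketch: the work done by `docker stats --no-stream` grows with the number
# of running containers, because each container is queried separately.

def collect_stats(container_id: str) -> dict:
    """Hypothetical stand-in for a single per-container stats query."""
    return {"container": container_id, "cpu": "0.00%", "mem": "0B / 0B"}

def gather_all(container_ids: list[str]) -> list[dict]:
    # One collection per running container -> n operations total.
    return [collect_stats(cid) for cid in container_ids]

containers = [f"c{i}" for i in range(10)]  # pretend 10 containers are running
rows = gather_all(containers)
print(len(rows))  # one stats row per container
```

Doubling the length of `containers` doubles the number of `collect_stats` calls, which is exactly the linear pattern analyzed below.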
As the number of containers increases, the command must collect stats from each one.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 stats collections |
| 100 | 100 stats collections |
| 1000 | 1000 stats collections |
Pattern observation: The time grows directly with the number of containers.
Time Complexity: O(n)
This means the time to get stats increases linearly as you add more containers. Space usage is also O(n), since the command holds one stats row per running container.
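The table's pattern can be checked with a small simulation that counts one collection per container (again a model of the command's behavior, not a measurement of Docker itself):

```python
# Count simulated per-container collections for the input sizes in the table.

def operations_for(n: int) -> int:
    """Number of stats collections one snapshot pass performs for n containers."""
    ops = 0
    for _ in range(n):  # one stats collection per running container
        ops += 1
    return ops

for n in (10, 100, 1000):
    print(n, operations_for(n))  # operations grow in direct proportion to n
```

Multiplying the container count by 10 multiplies the operation count by 10, the defining property of linear, O(n), growth.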
[X] Wrong: "Getting stats for many containers takes the same time as for one container."
[OK] Correct: Each container adds work because stats must be collected separately, so time grows with container count.
Understanding how commands scale with input size helps you predict performance and design better monitoring tools.
What if we removed the --no-stream option and streamed stats continuously instead? How would the time complexity change?