Prometheus for Docker monitoring - Time & Space Complexity
When using Prometheus to monitor Docker containers, it's important to understand how the time to collect metrics scales as more containers are added.
Analyze the time complexity of this Prometheus scrape configuration for Docker containers.
```yaml
scrape_configs:
  - job_name: 'docker'
    metrics_path: '/metrics'
    scrape_interval: 15s
    static_configs:
      - targets: ['container1:9100', 'container2:9100', 'container3:9100']
    # Prometheus scrapes each container's metrics endpoint
    # to collect monitoring data regularly.
```
This config tells Prometheus to collect metrics from each Docker container's metrics endpoint every 15 seconds.
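The scrape loop this config describes can be sketched in a few lines of Python. This is a minimal model, not Prometheus's real implementation: the `fetch_metrics` stub stands in for an actual HTTP GET to each target's `/metrics` endpoint.

```python
def fetch_metrics(target):
    # Stand-in for an HTTP GET to http://<target>/metrics.
    return f"# metrics from {target}"

def scrape_cycle(targets):
    # One request per target: O(n) work every scrape interval.
    return [fetch_metrics(t) for t in targets]

targets = ["container1:9100", "container2:9100", "container3:9100"]
results = scrape_cycle(targets)
print(len(results))  # one response per container: 3
```

The loop body runs once per target, which is exactly the "primary operation" counted in the analysis below.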
Look at what repeats during the monitoring process.
- Primary operation: Prometheus sends HTTP requests to each container's metrics endpoint.
- How many times: Once per container every scrape interval (e.g., every 15 seconds).
As the number of containers increases, the number of HTTP requests Prometheus makes grows.
| Input Size (n containers) | Approx. Operations (HTTP requests) |
|---|---|
| 10 | 10 requests per scrape |
| 100 | 100 requests per scrape |
| 1000 | 1000 requests per scrape |
Pattern observation: The number of requests grows directly with the number of containers.
Time Complexity: O(n)
This means the time to collect metrics grows linearly as you add more containers to monitor. Space scales the same way: each container contributes its own set of time series, so the samples stored per scrape also grow as O(n).
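As a quick back-of-the-envelope check, the request rate is just container count times scrapes per minute. The helper below is illustrative arithmetic, not part of any Prometheus API:

```python
def requests_per_minute(n_containers, scrape_interval_s=15):
    # One request per container per scrape; a 15s interval
    # means 60 / 15 = 4 scrapes per minute.
    return n_containers * (60 // scrape_interval_s)

print(requests_per_minute(100))   # 400 requests per minute
print(requests_per_minute(1000))  # 4000 requests per minute
```

Doubling the container count doubles the request rate, which is the linear growth the table above shows.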
[X] Wrong: "Adding more containers won't affect Prometheus scraping time much because it scrapes in parallel."
[OK] Correct: Even if scraping happens in parallel, each container adds a request that consumes resources and time, so total work still grows with container count.
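To see why parallelism changes wall-clock time but not total work, here is a hedged sketch using a thread pool; the `fetch` stub again stands in for a real HTTP request:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(target):
    # Stand-in for an HTTP GET to http://<target>/metrics.
    return f"metrics from {target}"

def scrape_parallel(targets, workers=10):
    # With w workers, wall-clock time per scrape is roughly O(n / w),
    # but the total requests sent (and bytes parsed) is still O(n).
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch, targets))

targets = [f"container{i}:9100" for i in range(100)]
print(len(scrape_parallel(targets)))  # 100: one request per container
```

Parallelism divides the latency across workers, but every container still costs one request's worth of CPU, network, and parsing.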
Understanding how monitoring scales helps you design systems that stay reliable as they grow. This skill shows you can think about real-world system behavior, not just code.
What if Prometheus used a push model instead of scraping each container? How would the time complexity change?
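One way to reason about it: under a push model each container sends its own metrics, so the sketch below (the `Gateway` class is hypothetical, not the real Pushgateway API) shows the collector still receives O(n) payloads per interval. The work moves from the collector's request loop to the containers, but it doesn't shrink.

```python
class Gateway:
    # Hypothetical central collector that containers push to.
    def __init__(self):
        self.received = []

    def push(self, source, payload):
        # O(1) per push, but total pushes per interval still scale with n.
        self.received.append((source, payload))

gw = Gateway()
for i in range(3):
    gw.push(f"container{i}", "metric payload")
print(len(gw.received))  # 3: one push per container per interval
```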