Monitoring with Atlas metrics in MongoDB - Time & Space Complexity
When monitoring MongoDB with Atlas metrics, we want to understand how the cost of gathering and processing metrics changes as the amount of data grows.
We ask: How does the time to collect and analyze metrics scale with the number of monitored database operations?
Analyze the time complexity of this MongoDB aggregation to get operation counts per collection.
```javascript
db.system.profile.aggregate([
  { $match: { ns: { $exists: true } } },
  { $group: { _id: "$ns", count: { $sum: 1 } } },
  { $sort: { count: -1 } }
])
```
This code groups profiling data by namespace (collection) and counts operations per collection, then sorts by count.
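The pipeline's behavior can be sketched outside MongoDB. Below is a minimal Python simulation, where a plain list of dicts stands in for `system.profile` documents (the sample namespaces are invented for illustration):

```python
from collections import Counter

def count_ops_per_collection(profile_docs):
    """Simulate the pipeline: $match on ns existing, $group by ns
    with a count, then $sort by count descending."""
    # $match: keep only documents that have an "ns" field
    matched = (doc for doc in profile_docs if "ns" in doc)
    # $group: one counter bucket per namespace, incremented once per document
    counts = Counter(doc["ns"] for doc in matched)
    # $sort: order the grouped results by count, descending
    return sorted(counts.items(), key=lambda pair: pair[1], reverse=True)

# Hypothetical profiling entries (namespaces invented for illustration)
docs = [
    {"ns": "shop.orders"},
    {"ns": "shop.orders"},
    {"ns": "shop.users"},
    {"op": "command"},  # no "ns" field -> filtered out by $match
]
print(count_ops_per_collection(docs))
# → [('shop.orders', 2), ('shop.users', 1)]
```

Each document flows through the stages exactly once, which is the behavior the complexity analysis below relies on.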
To analyze the complexity, look for work that is repeated for each piece of data.
- Primary operation: Scanning all profiling documents to group by collection.
- How many times: Once per profiling document, which grows with the number of operations logged.
As the number of profiling documents increases, the aggregation must process each one.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 document reads and group updates |
| 100 | About 100 document reads and group updates |
| 1000 | About 1000 document reads and group updates |
Pattern observation: The work grows roughly in direct proportion to the number of profiling entries.
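The table's pattern can be checked mechanically. The sketch below (synthetic data, instrumented with a counter) confirms that the number of per-document steps equals n:

```python
from collections import Counter

def grouped_with_op_count(profile_docs):
    """Group by namespace while counting how many documents are touched."""
    ops = 0
    counts = Counter()
    for doc in profile_docs:
        ops += 1  # one unit of work per profiling document
        if "ns" in doc:
            counts[doc["ns"]] += 1
    return counts, ops

for n in (10, 100, 1000):
    # Synthetic profiling entries spread over 5 invented namespaces
    docs = [{"ns": f"db.coll{i % 5}"} for i in range(n)]
    _, ops = grouped_with_op_count(docs)
    print(n, ops)  # ops grows in direct proportion to n
```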
Time Complexity: O(n)
This means the time to process metrics grows linearly with the number of profiling records. (The final $sort orders only the k grouped results, one per distinct namespace, at O(k log k) cost; since k is typically far smaller than n, the document scan dominates.)
[X] Wrong: "The aggregation runs instantly no matter how many profiling entries exist."
[OK] Correct: Each profiling document must be read and processed, so more data means more work and longer time.
Understanding how monitoring queries scale helps you design efficient dashboards and alerts that stay fast as your data grows.
What if we added an index on the "ns" field? How would that affect the time complexity of this aggregation?