$group stage for aggregation in MongoDB - Time & Space Complexity
When using the $group stage in MongoDB aggregation, we want to know how the time to run the operation changes as the data grows.
We ask: How does grouping many documents affect the work MongoDB does?
Analyze the time complexity of the following code snippet.
```javascript
db.orders.aggregate([
  {
    $group: {
      _id: "$customerId",
      totalAmount: { $sum: "$amount" }
    }
  }
])
```
This groups orders by customer ID and sums the amount spent by each customer.
Look for repeated work inside the aggregation.
- Primary operation: Scanning each document once to assign it to a group.
- How many times: Once per document in the collection.
As the number of documents grows, the work grows at roughly the same rate.
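The single-pass, hash-based behavior can be modeled in plain JavaScript. This is a sketch of the idea, not MongoDB's actual implementation, and the sample documents are made up:

```javascript
// Sketch of hash-based grouping: one pass over the data,
// one constant-time map lookup per document.
const orders = [
  { customerId: "C1", amount: 10 },
  { customerId: "C2", amount: 5 },
  { customerId: "C1", amount: 20 },
  { customerId: "C3", amount: 7 },
];

const groups = new Map(); // group key -> running total
for (const order of orders) {           // n iterations total
  const key = order.customerId;
  const running = groups.get(key) || 0; // O(1) expected lookup
  groups.set(key, running + order.amount);
}

// Shape the result like $group's output documents.
const result = [...groups].map(
  ([id, total]) => ({ _id: id, totalAmount: total })
);
console.log(result);
// [ { _id: 'C1', totalAmount: 30 },
//   { _id: 'C2', totalAmount: 5 },
//   { _id: 'C3', totalAmount: 7 } ]
```

Because each document triggers a fixed amount of work (one lookup, one addition, one store), the loop's total cost scales linearly with the number of documents.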
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 document checks and group assignments |
| 100 | About 100 document checks and group assignments |
| 1000 | About 1000 document checks and group assignments |
Pattern observation: The work grows linearly with the number of documents.
Time Complexity: O(n)
This means the time to group grows directly with the number of documents processed.
Space Complexity: O(g), where g is the number of distinct group keys (here, distinct customerId values). MongoDB keeps one accumulator per group in memory, and by default a $group stage that exceeds 100 MB of RAM will fail unless allowDiskUse is enabled.
[X] Wrong: "Grouping always takes the same time no matter how many documents there are."
[OK] Correct: Grouping must look at each document to decide its group, so more documents mean more work.
Understanding how grouping scales helps you explain performance in real projects and shows you can think about data size effects clearly.
"What if we added a $match stage before $group to filter documents? How would that affect the time complexity?"
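One way to build intuition for that question is to model both stages in memory. This is a hypothetical sketch (the status field and sample data are invented): $match shrinks n to some m ≤ n before grouping, which cuts real-world work, but $match itself still examines every document, so the pipeline remains O(n) overall unless an index lets MongoDB skip the full scan.

```javascript
// Hypothetical in-memory model of a $match -> $group pipeline.
const docs = [
  { customerId: "C1", amount: 10, status: "shipped" },
  { customerId: "C2", amount: 5,  status: "pending" },
  { customerId: "C1", amount: 20, status: "shipped" },
];

// $match: one pass over all n documents -> m matching documents.
const matched = docs.filter((d) => d.status === "shipped");

// $group: one pass over only the m matched documents.
const totals = {};
for (const d of matched) {
  totals[d.customerId] = (totals[d.customerId] || 0) + d.amount;
}

console.log(totals); // { C1: 30 }
```

Filtering first is still the right move in practice: it reduces the constant factors, lowers $group's memory footprint, and placing $match early lets MongoDB use an index on the filtered field when one exists.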