$facet for multiple pipelines in MongoDB - Time & Space Complexity
When using $facet in MongoDB, we run several sub-pipelines over the same input documents within a single aggregation stage. Understanding how this affects time complexity tells us how the total work grows as the data grows.
We want to see how the total work changes when we add more documents or more pipelines.
Analyze the time complexity of the following code snippet.
```js
db.collection.aggregate([
  {
    $facet: {
      pipeline1: [ { $match: { status: "A" } }, { $count: "countA" } ],
      pipeline2: [ { $match: { qty: { $gt: 50 } } }, { $group: { _id: "$item", total: { $sum: "$qty" } } } ]
    }
  }
])
```
This stage runs two pipelines over the same collection within one aggregation: one counts documents with status "A"; the other filters documents with quantity over 50 and groups them by item, summing the quantities.
Look for repeated work inside the pipelines.
- Primary operation: Each pipeline scans the documents to filter and process them.
- How many times: Each pipeline processes all documents independently, so the scanning repeats for each pipeline.
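To make the repeated work concrete, here is a minimal Python sketch that simulates the two facet pipelines over an in-memory list of documents. The sample data and field names are hypothetical stand-ins for the collection above; `scans` counts how many documents each pipeline inspects.

```python
# Hypothetical sample documents; field names mirror the aggregation above.
docs = [
    {"status": "A", "item": "apple", "qty": 60},
    {"status": "B", "item": "pear", "qty": 40},
    {"status": "A", "item": "apple", "qty": 70},
    {"status": "B", "item": "plum", "qty": 90},
]

scans = 0  # total documents inspected across both pipelines

# pipeline1: $match on status, then $count
scans += len(docs)  # this pipeline looks at every document
count_a = sum(1 for d in docs if d["status"] == "A")

# pipeline2: $match on qty, then $group summing qty per item
scans += len(docs)  # this pipeline also looks at every document
totals = {}
for d in docs:
    if d["qty"] > 50:
        totals[d["item"]] = totals.get(d["item"], 0) + d["qty"]

print(count_a)  # 2
print(totals)   # {'apple': 130, 'plum': 90}
print(scans)    # 8 == 2 pipelines x 4 documents
```

With 2 pipelines and 4 documents, 8 inspections happen in total, which is the k × n pattern the table below illustrates at larger sizes.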
As the number of documents grows, each pipeline must check more documents.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 2 x 10 = 20 operations |
| 100 | About 2 x 100 = 200 operations |
| 1000 | About 2 x 1000 = 2000 operations |
Pattern observation: The total work grows roughly linearly with data size, multiplied by the number of pipelines.
Time Complexity: O(k x n)
This means the work grows linearly with the number of documents n and the number of pipelines k inside $facet.
[X] Wrong: "Using $facet runs all pipelines in one pass, so it only scans data once no matter how many pipelines there are."
[OK] Correct: Each pipeline inside $facet runs independently, so the data is scanned separately for each pipeline, multiplying the work.
Knowing how $facet scales helps you explain performance trade-offs clearly. It shows you understand how MongoDB handles multiple analyses within a single aggregation, a useful skill in real projects.
What if we added more pipelines inside $facet? How would the time complexity change?
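As a rough answer, the counting argument generalizes directly: with k pipelines, each one scans all n documents, so the total grows to k × n. A small Python sketch (with hypothetical match predicates standing in for the pipeline stages) makes this visible:

```python
def facet_scan_count(n_docs, pipelines):
    """Count documents inspected when each pipeline scans independently."""
    docs = list(range(n_docs))  # stand-in documents
    scans = 0
    for predicate in pipelines:
        for d in docs:
            predicate(d)  # each pipeline evaluates its filter on every document
            scans += 1
    return scans

# Three hypothetical pipelines over 1000 documents:
preds = [lambda d: d % 2 == 0, lambda d: d > 500, lambda d: d < 100]
print(facet_scan_count(1000, preds))  # 3000 == 3 pipelines x 1000 documents
```

Adding a fourth pipeline would raise the total to 4 × n: the complexity stays O(k × n), with k growing while n is unchanged.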