$slice modifier with $push in MongoDB - Time & Space Complexity
When using the $slice modifier with $push in MongoDB, it helps to understand how the operation's running time grows as the array gets longer.
Specifically, we want to know how the cost of appending and trimming elements scales with the current array length.
Analyze the time complexity of the following MongoDB update operation.
```javascript
db.collection.updateOne(
  { _id: 1 },
  { $push: { scores: { $each: [90], $slice: -5 } } }
)
```
This code adds a new score to the scores array and keeps only the last 5 elements.
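Outside the database, the observable effect of `$each` combined with a negative `$slice` can be sketched in plain JavaScript. This is a simulation of the behavior, not MongoDB's internal implementation, and `pushWithSlice` is a hypothetical helper name:

```javascript
// Simulate the observable effect of $push with $each and $slice.
// pushWithSlice is an illustrative helper, not a MongoDB API.
function pushWithSlice(arr, items, slice) {
  const combined = arr.concat(items);      // $each appends every item
  return slice < 0
    ? combined.slice(slice)                // negative: keep the last |slice| elements
    : combined.slice(0, slice);            // positive: keep the first slice elements
}

const doc = { _id: 1, scores: [70, 75, 80, 85, 88] };
doc.scores = pushWithSlice(doc.scores, [90], -5);
console.log(doc.scores); // [ 75, 80, 85, 88, 90 ]
```

Note that the oldest score (70) is dropped so the array stays at 5 elements after the push.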
Look for repeated work inside the update.
- Primary operation: Trimming the array to keep only the last 5 elements after pushing.
- How many times: once per element of the current array; the trim must process the whole array, so the count grows with the array length n.
As the array grows, keeping only the last 5 elements still means reading and rewriting every existing element before the cut.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 checks to trim |
| 100 | About 100 checks to trim |
| 1000 | About 1000 checks to trim |
Pattern observation: The trimming work grows roughly in direct proportion to the array size.
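The table's pattern can be reproduced with a toy cost model in JavaScript. This assumes (as the analysis above does) that the update reads or copies every existing element; `operationsToTrim` is an illustrative name, not a MongoDB function:

```javascript
// Toy cost model: pushing one score and keeping the last 5 still
// copies every existing element, so the work is about n + 1 operations.
function operationsToTrim(n) {
  const arr = Array.from({ length: n }, (_, i) => i); // existing array of length n
  let ops = 0;
  const combined = [];
  for (const x of arr) { combined.push(x); ops++; }   // copy the existing n elements
  combined.push(n); ops++;                            // append the new score
  const kept = combined.slice(-5);                    // keep only the last 5
  return { ops, kept };
}

console.log(operationsToTrim(10).ops);   // 11
console.log(operationsToTrim(100).ops);  // 101
console.log(operationsToTrim(1000).ops); // 1001
```

The operation counts track the table: roughly n work for an array of length n, regardless of the fixed output size.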
Time Complexity: O(n)
This means the time to push and slice grows linearly with the size of the array.
[X] Wrong: "Using $slice with $push always runs in constant time because it only keeps a fixed number of elements."
[OK] Correct: Even though the final array size is fixed, MongoDB must scan the whole array to trim it, so the time depends on the current array length.
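The distinction between output size and input size can be seen directly in a small sketch. Here `pushAndSlice` is an illustrative stand-in for the database-side update, under the assumption that the whole array is read:

```javascript
// The output size is constant (5), but the input that must be read grows with n.
function pushAndSlice(arr, item) {
  return arr.concat([item]).slice(-5); // touches all n + 1 elements
}

for (const n of [10, 100, 1000]) {
  const arr = Array.from({ length: n }, (_, i) => i);
  console.log(n, pushAndSlice(arr, n).length); // result length is always 5
}
```

A constant-size result is easy to mistake for constant-time work; here the result never exceeds 5 elements even though the cost of producing it is O(n).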
Understanding how array updates scale helps you explain performance trade-offs clearly and shows that you know how database operations behave as data grows.
What if we replaced $slice with a fixed array size limit in the schema? How would that affect the time complexity?