Capped collections for fixed-size data in MongoDB - Time & Space Complexity
When working with capped collections in MongoDB, it's important to understand how the time to insert or read data changes as the collection grows.
We want to see how the fixed size of capped collections affects the speed of operations.
Analyze the time complexity of inserting documents into a capped collection.
```javascript
// Create a capped collection limited to 100,000 bytes (the `size` field is in bytes).
db.createCollection("logs", { capped: true, size: 100000 });

// Insert n log entries; once the size limit is reached, the oldest
// entries are overwritten automatically.
for (let i = 0; i < n; i++) {
  db.logs.insertOne({ message: `Log entry ${i}`, timestamp: new Date() });
}
```
This code creates a capped collection with a fixed size (100,000 bytes) and inserts n log entries into it.
Look at what repeats as we insert documents.
- Primary operation: Inserting one document into the capped collection.
- How many times: n times, once for each log entry.
Once the collection reaches its size limit, it stops growing: each new insert overwrites the oldest entries automatically, so the total size stays fixed.
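The eviction behavior can be sketched with a fixed-capacity ring buffer. This is an illustrative simulation in plain JavaScript, not MongoDB's actual implementation; the `CappedLog` class and its capacity of 3 are invented for the example:

```javascript
// Illustrative sketch (not MongoDB internals): a fixed-capacity ring buffer
// that mimics how a capped collection evicts its oldest entries.
class CappedLog {
  constructor(capacity) {
    this.capacity = capacity;
    this.entries = new Array(capacity); // pre-allocated, fixed size
    this.next = 0;   // slot where the next entry is written
    this.count = 0;  // number of live entries (<= capacity)
  }

  insert(entry) {
    // Overwrites the oldest slot once full: constant work per insert,
    // no scanning and no explicit deletion.
    this.entries[this.next] = entry;
    this.next = (this.next + 1) % this.capacity;
    if (this.count < this.capacity) this.count++;
  }

  // Returns live entries from oldest to newest.
  toArray() {
    const start = this.count < this.capacity ? 0 : this.next;
    const out = [];
    for (let i = 0; i < this.count; i++) {
      out.push(this.entries[(start + i) % this.capacity]);
    }
    return out;
  }
}

const log = new CappedLog(3);
for (let i = 0; i < 5; i++) log.insert(`Log entry ${i}`);
console.log(log.toArray()); // the two oldest entries have been evicted
```

After five inserts into a capacity-3 buffer, only entries 2, 3, and 4 remain, and no insert ever had to move or delete existing data.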
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 insert operations, each quick |
| 100 | 100 insert operations, still quick |
| 1000 | 1000 insert operations, speed stays steady |
Pattern observation: Each insert takes about the same time regardless of how many documents are already in the capped collection.
Time Complexity: O(n) total for n inserts, O(1) per insert
Inserting n documents takes time proportional to n because each individual insert does a constant amount of work, no matter how full the collection already is.
[X] Wrong: "Inserting into a capped collection slows down as it fills up because it has to delete old data."
[OK] Correct: A capped collection has a fixed size and works like a circular buffer: new documents overwrite the oldest ones in place, so insert speed stays constant no matter how full the collection is.
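The wrong intuition assumes that making room requires deleting old data, which sounds expensive. The contrast can be sketched in plain JavaScript (both functions are invented toy examples, not MongoDB code): a naive version deletes the oldest element with `Array.prototype.shift()`, which moves every remaining element, while the ring-buffer version simply overwrites a slot:

```javascript
// Two toy ways to keep only the newest `cap` entries (illustrative only).

// Naive: delete the oldest entry, then append. Array#shift moves every
// remaining element, so each insert costs O(cap) once the array is full.
function naiveInsert(arr, cap, entry) {
  if (arr.length === cap) arr.shift(); // O(cap) element moves
  arr.push(entry);
}

// Circular buffer: overwrite the oldest slot in place.
// Constant work per insert no matter how full the buffer is.
function ringInsert(buf, state, entry) {
  buf[state.next] = entry;
  state.next = (state.next + 1) % buf.length;
}

const cap = 4;
const naive = [];
const ring = new Array(cap);
const state = { next: 0 };
for (let i = 0; i < 10; i++) {
  naiveInsert(naive, cap, i);
  ringInsert(ring, state, i);
}
console.log(naive); // [ 6, 7, 8, 9 ]
```

Both approaches end up holding the same four newest entries, but only the circular buffer does constant work per insert, which is the property the table above observed.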
Understanding capped collections shows you know how fixed-size data storage works efficiently, a useful skill for real-world logging and data retention tasks.
"What if we changed the capped collection to a normal collection without size limits? How would the time complexity of inserts change as data grows?"