estimatedDocumentCount for speed in MongoDB - Time & Space Complexity
We want to understand how fast MongoDB's estimatedDocumentCount runs as the collection grows.
How does the time to get a document count change when there are more documents?
Analyze the time complexity of the following code snippet.

```javascript
const count = await db.collection('users').estimatedDocumentCount();
console.log('Estimated count:', count);
```
This code returns an estimated number of documents in the 'users' collection without scanning the documents themselves; the estimate comes from collection metadata.
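For context, here is a slightly fuller sketch of how this call might be wrapped in application code. The helper name and the `maxTimeMS` value are illustrative choices, and `db` is assumed to be a connected `Db` instance from the official MongoDB Node.js driver:

```javascript
// Hypothetical helper: returns the metadata-based estimate for a collection.
// Assumes `db` is a connected MongoDB Node.js driver Db instance.
async function estimateCollectionSize(db, collectionName) {
  // maxTimeMS bounds how long the server may spend on the operation;
  // the call itself reads collection metadata rather than scanning documents.
  return db.collection(collectionName).estimatedDocumentCount({ maxTimeMS: 1000 });
}
```

Because the call never touches individual documents, it is safe to use on very large collections where a full count would be expensive.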
Identify any loops, recursion, or array traversals that repeat work as the input grows.
- Primary operation: Accessing collection metadata or internal statistics.
- How many times: once; there are no per-document loops.
The operation reads precomputed metadata, so its cost does not grow as the collection gains documents.
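To see why a metadata read is constant time, here is a minimal plain-JavaScript simulation (no database required; the `FakeCollection` class is invented for illustration). Inserts keep a running counter up to date, so reading the "estimated count" is a single property access no matter how many documents exist:

```javascript
// Toy model of a collection that tracks its size in metadata.
class FakeCollection {
  constructor() {
    this.docs = [];
    this.meta = { count: 0 }; // kept up to date on every write, like collection stats
  }
  insert(doc) {
    this.docs.push(doc);
    this.meta.count += 1; // metadata maintained incrementally
  }
  estimatedDocumentCount() {
    return this.meta.count; // O(1): one property read, no scan of this.docs
  }
}

const users = new FakeCollection();
for (let i = 0; i < 1000; i++) users.insert({ id: i });
console.log(users.estimatedDocumentCount()); // 1000, without touching the documents
```

The work of counting was paid incrementally at write time, which is why the read stays cheap regardless of collection size.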
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | Few operations |
| 100 | Few operations |
| 1000 | Few operations |
Pattern observation: The time stays almost the same even if the collection grows.
Time Complexity: O(1)
This means the time to get the estimated count stays constant no matter how many documents there are.
[X] Wrong: "Getting the document count always scans every document."
[OK] Correct: estimatedDocumentCount() uses metadata, so it does not scan all documents and is much faster.
Knowing when a database operation runs fast regardless of data size shows you understand how databases optimize tasks behind the scenes.
"What if we used countDocuments() instead? How would the time complexity change?"