Firestore collections and documents in GCP - Time & Space Complexity
When working with Firestore, it's important to understand how data-access time grows as your collections accumulate documents. Specifically, we want to know how the number of billed operations scales when we read or write many documents. Let's analyze the time complexity of reading every document in a Firestore collection.
```javascript
// Fetch all documents from the 'users' collection
const collectionRef = firestore.collection('users');
const snapshot = await collectionRef.get();
snapshot.forEach(doc => {
  console.log(doc.id, '=>', doc.data());
});
```
This code fetches all documents inside the 'users' collection and processes each one.
Look at what repeats when fetching documents:
- Primary operation: Reading each document from the collection.
- How many times: Once per document in the collection.
As the number of documents grows, the number of reads grows too.
| Documents in Collection (n) | Approx. Document Reads |
|---|---|
| 10 | 10 document reads |
| 100 | 100 document reads |
| 1000 | 1000 document reads |
Pattern observation: The number of reads grows directly with the number of documents.
Time Complexity: O(n)
This means the time to read documents grows linearly with how many documents you have.
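The linear relationship can be sketched with a minimal in-memory model. This is a hypothetical mock, not the Firestore SDK: it simply mirrors the fact that Firestore bills one document read per document returned by a collection fetch.

```javascript
// Hypothetical in-memory model of Firestore's per-document read billing.
// Fetching a collection of n documents costs n document reads => O(n).
class MockCollection {
  constructor(docs) {
    this.docs = docs;
    this.reads = 0; // running count of billed document reads
  }

  get() {
    // One billed read per document returned, just like a real collection fetch.
    this.reads += this.docs.length;
    return this.docs;
  }
}

// 100 documents in the collection => 100 reads for a single get()
const users = Array.from({ length: 100 }, (_, i) => ({ id: `user${i}` }));
const col = new MockCollection(users);
col.get();
console.log(col.reads); // 100
```

Doubling the collection size doubles `col.reads`, which is exactly the O(n) growth shown in the table above.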
[X] Wrong: "Reading a collection is always a single fast operation regardless of size."
[OK] Correct: Firestore reads each document separately, so more documents mean more reads and more time.
Understanding how Firestore scales with data size helps you design efficient queries and avoid slow operations in real projects.
"What if we added a filter to only read documents matching a condition? How would the time complexity change?"