Firestore document model in GCP - Time & Space Complexity
We want to understand how the time to read or write data in Firestore changes as the database grows. Specifically: does the total number of stored documents affect the speed of a single operation?
To answer this, we analyze the time complexity of reading and writing a document in Firestore.
```javascript
// Firestore example (Node.js, using the firebase-admin SDK)
const admin = require('firebase-admin');
admin.initializeApp();
const firestore = admin.firestore();

// Reference a single document by its ID
const docRef = firestore.collection('users').doc(userId);

// Write data to the document
await docRef.set({ name: 'Alice', age: 30 });

// Read the data back
const docSnap = await docRef.get();
if (docSnap.exists) {
  console.log(docSnap.data());
}
```
This sequence writes data to a single document and then reads it back.
Look at the main operations Firestore performs here.
- Primary operation: Reading or writing a single document by its ID.
- How many times: Once per document access.
Accessing a document by its ID takes about the same time no matter how many documents exist.
| Total Documents (n) | API Calls per Single-Document Access |
|---|---|
| 10 | 1 read or write operation |
| 100 | 1 read or write operation |
| 1000 | 1 read or write operation |
Pattern observation: The time stays roughly the same regardless of the total number of documents.
Time Complexity: O(1)
This means accessing or writing a single document takes constant time, no matter how many documents are stored.
[X] Wrong: "Reading a document will take longer as the database grows because there are more documents to search through."
[OK] Correct: Firestore uses document IDs as keys, so it directly locates documents without scanning all documents.
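The key-by-ID behavior can be pictured with an in-memory hash map. This is only an analogy (the `users` Map below is a local stand-in, not the Firestore API): fetching a value by key costs the same whether the map holds ten entries or ten million.

```javascript
// Analogy only (not the Firestore SDK): a collection keyed by
// document ID behaves like a hash map, so lookup cost does not
// depend on how many other entries exist.
const users = new Map();
users.set('user123', { name: 'Alice', age: 30 });
users.set('user456', { name: 'Bob', age: 25 });

// O(1): jump straight to the key, no scan over other documents
const alice = users.get('user123');
console.log(alice.name); // Alice
```

Adding thousands more entries to `users` would not change the cost of `users.get('user123')`, which mirrors why Firestore document access by ID stays constant time.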
Understanding how Firestore handles document access helps you explain efficient data retrieval in cloud databases.
Explaining it well signals that you understand how cloud services manage data at scale.
"What if we query documents by a field value instead of by document ID? How would the time complexity change?"