Cost optimization (read/write reduction) in Firebase - Time & Space Complexity
When using Firebase, every read and write operation is billed and takes time, so we want to understand how these costs grow as we perform more operations.
Analyze the time complexity of the following operation sequence.
```javascript
const db = firebase.firestore();

// Fetch each user document individually — one Firestore read per ID.
async function fetchUsers(userIds) {
  const users = [];
  for (const id of userIds) {
    const doc = await db.collection('users').doc(id).get();
    users.push(doc.data());
  }
  return users;
}
```
This code fetches user data for each user ID by reading documents one by one.
Identify the API calls, resource provisioning, and data transfers that repeat.
- Primary operation: Reading a user document from Firestore.
- How many times: Once for each user ID in the input list.
Each user ID causes one read operation, so the total reads grow directly with the number of users.
| Input Size (n) | Approx. API Calls/Operations |
|---|---|
| 10 | 10 reads |
| 100 | 100 reads |
| 1000 | 1000 reads |
Pattern observation: The number of reads grows linearly as the input size increases.
Time Complexity: O(n)
This means the cost and time increase directly in proportion to how many user IDs we fetch.
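To make the linear growth concrete, here is a small runnable sketch that models the read count and an illustrative dollar cost for the sequential fetch. The price per read is an assumed placeholder for illustration, not official Firestore pricing.

```javascript
// Model the cost of the sequential fetch: one Firestore read per user ID.
// PRICE_PER_READ is an illustrative assumption, not an official rate.
const PRICE_PER_READ = 0.06 / 100000; // assumed: $0.06 per 100k reads

function sequentialReadCost(numUserIds) {
  const reads = numUserIds; // O(n): one get() per ID
  return { reads, dollars: reads * PRICE_PER_READ };
}

console.log(sequentialReadCost(10).reads);   // 10 reads
console.log(sequentialReadCost(1000).reads); // 1000 reads
```

Doubling the input doubles both the read count and the modeled cost, which is exactly what O(n) predicts.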
[X] Wrong: "Fetching multiple users one by one is just as cheap as fetching them all at once."
[OK] Correct: Each separate read is a billed operation and its own network round trip. Fetching n users one by one pays for n sequential round trips of latency, while batching collapses them into far fewer requests. (Note that Firestore still bills one document read per document even inside a batch, so batching mainly saves time and request overhead rather than per-document read charges.)
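As a sketch of the batched alternative: Firestore's `in` operator on the document ID can fetch multiple documents per query, subject to a per-query limit (10 in older SDKs; newer releases allow more). The `chunkIds` helper below is runnable on its own; the commented query sketch assumes the classic namespaced `firebase.firestore()` API, and `fetchUsersBatched` is a hypothetical name.

```javascript
// Split an ID list into chunks that fit Firestore's 'in' query limit.
// The limit of 10 per query is an assumption (older SDKs; newer allow more).
function chunkIds(ids, size = 10) {
  const chunks = [];
  for (let i = 0; i < ids.length; i += size) {
    chunks.push(ids.slice(i, i + size));
  }
  return chunks;
}

// Sketch (assumes the namespaced firebase.firestore() API):
// async function fetchUsersBatched(userIds) {
//   const users = [];
//   for (const chunk of chunkIds(userIds)) {
//     const snap = await db.collection('users')
//       .where(firebase.firestore.FieldPath.documentId(), 'in', chunk)
//       .get();
//     snap.forEach((doc) => users.push(doc.data()));
//   }
//   return users;
// }
```

Billed document reads remain O(n), but the number of round trips drops from n to roughly n/10, which is where the latency savings come from.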
Understanding how costs grow with reads and writes helps you design efficient Firebase apps that save money and run faster.
"What if we changed the code to fetch all user documents in a single batch request? How would the time complexity change?"