Data aggregation patterns in Firebase - Time & Space Complexity
When aggregating data from many documents in Firebase, it's important to understand how the total fetch time grows as you add more items. Specifically, we want to know how the number of data requests scales with the amount of data gathered.
Analyze the time complexity of the following operation sequence.
```javascript
const db = firebase.firestore();
const userIds = ["user1", "user2", "user3", /* ... */];

async function aggregateUserData() {
  const results = [];
  for (const id of userIds) {
    // Each iteration issues one Firestore document read and waits for it
    // to finish before starting the next, so the reads run sequentially.
    const doc = await db.collection('users').doc(id).get();
    results.push(doc.data());
  }
  return results;
}
```
This code fetches data for each user ID one by one from the Firestore database.
Identify the operations that repeat: API calls, resource provisioning, and data transfers.
- Primary operation: One Firestore document read per user ID.
- How many times: Once for each user ID in the list.
Each additional user ID adds one more document read request.
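This one-read-per-ID pattern can be demonstrated with a small mock. The `makeFakeDb` helper below is a hypothetical stand-in for the Firestore SDK, not a real API; it simply counts how many read requests the loop issues:

```javascript
// Mock "database": each get() stands in for one Firestore document read.
// (makeFakeDb is a hypothetical stand-in, not part of any real SDK.)
function makeFakeDb() {
  let reads = 0;
  return {
    get(id) {
      reads += 1; // one request per document
      return Promise.resolve({ id, name: `User ${id}` });
    },
    reads: () => reads,
  };
}

// Same shape as the Firestore loop: one awaited read per user ID.
async function aggregateUserData(db, userIds) {
  const results = [];
  for (const id of userIds) {
    results.push(await db.get(id));
  }
  return results;
}

// Usage: 100 IDs -> 100 reads, i.e. O(n).
const db = makeFakeDb();
const ids = Array.from({ length: 100 }, (_, i) => `user${i + 1}`);
aggregateUserData(db, ids).then((results) => {
  console.log(db.reads());     // 100
  console.log(results.length); // 100
});
```

Because the request count tracks the input size exactly, doubling the ID list doubles the number of requests.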
| Input Size (n) | Approx. API Calls/Operations |
|---|---|
| 10 | 10 document reads |
| 100 | 100 document reads |
| 1000 | 1000 document reads |
Pattern observation: The number of reads grows directly with the number of user IDs.
Time Complexity: O(n)
This means the total time to fetch all data grows linearly with the number of users: doubling the user list roughly doubles the number of reads and the time spent waiting on them.
[X] Wrong: "Fetching multiple documents at once always takes the same time as fetching one."
[OK] Correct: Each document read is a separate request, so more documents mean more time.
Understanding how the number of requests grows helps you design more efficient apps and justify your design choices clearly, for example in code reviews or interviews.
"What if we used a single query to fetch all user documents at once? How would the time complexity change?"
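One way to reason about that question: Firestore's `in` filter can fetch many documents in a single query, but it caps the number of values per filter (historically 10, more recently 30; treat the exact limit as an assumption to verify against current docs). So a large ID list still needs about ⌈n / batchSize⌉ queries. The mock below counts round trips instead of calling the real SDK; `makeFakeBatchDb` and its `getAll` method are hypothetical stand-ins:

```javascript
// Split an array into batches of at most `size` elements.
function chunk(arr, size) {
  const out = [];
  for (let i = 0; i < arr.length; i += size) {
    out.push(arr.slice(i, i + size));
  }
  return out;
}

// Hypothetical stand-in for a batched Firestore `in` query, e.g.
// db.collection('users').where(FieldPath.documentId(), 'in', ids).get()
function makeFakeBatchDb() {
  let queries = 0;
  return {
    getAll(ids) {
      queries += 1; // one round trip per batch
      return Promise.resolve(ids.map((id) => ({ id })));
    },
    queries: () => queries,
  };
}

async function aggregateUserDataBatched(db, userIds, batchSize = 30) {
  const batches = chunk(userIds, batchSize);
  // Batches are independent, so they can run in parallel.
  const perBatch = await Promise.all(batches.map((b) => db.getAll(b)));
  return perBatch.flat();
}

// Usage: 100 IDs -> ceil(100 / 30) = 4 round trips, still 100 documents read.
const batchDb = makeFakeBatchDb();
const ids = Array.from({ length: 100 }, (_, i) => `user${i + 1}`);
aggregateUserDataBatched(batchDb, ids).then((results) => {
  console.log(batchDb.queries()); // 4
  console.log(results.length);    // 100
});
```

The document count fetched (and Firestore's per-document read billing) stays O(n); what changes is the number of network round trips, which drops to roughly n / 30 and can run in parallel, so wall-clock latency improves substantially even though the asymptotic read complexity does not.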