Batch limits and best practices in Firebase - Time & Space Complexity
When working with Firebase batch operations, it is important to understand how the number of operations affects execution time: specifically, how the time grows as we add more writes to a batch.
Analyze the time complexity of this batch write operation.
```javascript
// Uses the Firebase Admin SDK: listDocuments() is only available server-side.
const batch = firestore.batch();
const docs = await firestore.collection('users').listDocuments();
for (const doc of docs) {
  batch.update(doc, { active: true }); // one update command queued per document
}
await batch.commit(); // a single commit sends all queued updates to the server
```
This code updates every user document in a single batched write. Keep in mind that Firestore has historically capped a single batch at 500 operations, so very large collections need to be split into multiple batches (check the current Firestore quotas for the exact limit).
Look at what repeats as the batch size grows.
- Primary operation: Adding update commands to the batch for each document.
- How many times: Once per document in the collection.
- Dominant operation: `batch.commit()` sends all updates in one request, but the preparation work grows with the number of documents.
As you add more documents, the number of update commands grows directly with the number of documents.
| Input Size (n) | Approx. API Calls / Operations |
|---|---|
| 10 | 10 update commands + 1 commit |
| 100 | 100 update commands + 1 commit |
| 1000 | 1000 update commands + 1 commit |
Pattern observation: The number of update commands grows linearly with the number of documents.
Time Complexity: O(n)
This means the time to prepare and send the batch grows directly with the number of documents you update. Space grows the same way: the batch object holds all n pending updates in memory until `commit()` is called, so space complexity is also O(n).
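The linear growth can be sketched without touching Firestore at all. Here `FakeBatch` is a hypothetical stand-in for `firestore.batch()` that only counts how many operations get queued:

```javascript
// Hypothetical stand-in for firestore.batch(): it only counts queued updates.
class FakeBatch {
  constructor() { this.ops = 0; }
  update() { this.ops += 1; }       // one operation queued per call
  commit() { return this.ops + 1; } // n update commands + 1 commit request
}

// Total API-level operations for n documents: n updates + 1 commit.
function totalOperations(n) {
  const batch = new FakeBatch();
  for (let i = 0; i < n; i++) batch.update();
  return batch.commit();
}

console.log(totalOperations(10));   // 11
console.log(totalOperations(100));  // 101
console.log(totalOperations(1000)); // 1001
```

The counts match the table above: the work grows in lockstep with n, which is exactly what O(n) describes.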
[X] Wrong: "Batch writes always take the same time no matter how many updates are inside."
[OK] Correct: Each update adds work to the batch, so more updates mean more time to prepare and send.
Understanding how batch operations scale helps you design efficient data updates and avoid hitting limits in real projects.
"What if we split a large batch into multiple smaller batches? How would that affect the time complexity?"