Batch writes in Firebase - Time & Space Complexity
When using batch writes in Firebase, it's important to know how the total completion time changes as you add more writes. In other words: how does the number of write operations affect the total time taken?
Analyze the time complexity of the following batch write operation.
```javascript
const batch = firestore.batch();
const docs = [doc1, doc2, doc3, /* ... */ docN];

for (const doc of docs) {
  batch.set(doc.ref, doc.data());
}

await batch.commit();
```
This code groups multiple document writes into one batch and sends them together to Firebase.
Look at what repeats as the input grows.
- Primary operation: adding each write to the batch with `batch.set()`.
- How many times: once per document to write (n times).
- Dominant operation: `batch.commit()` sends all writes at once, but the batch size still determines how much work that single commit performs.
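The pattern above can be modeled without the real SDK. In this sketch, `FakeBatch` is a stand-in class of my own (not Firebase's `WriteBatch`): each `set()` call queues one write locally in O(1), and `commit()` is a single call that still has to account for all n queued writes.

```javascript
// Toy stand-in for a Firestore WriteBatch (illustration only, not the real SDK).
class FakeBatch {
  constructor() {
    this.queued = [];      // writes accumulate here; each set() is O(1)
    this.commitCalls = 0;  // commit() is invoked once per batch
  }
  set(ref, data) {
    this.queued.push({ ref, data });
    return this; // the real API also returns the batch so calls can be chained
  }
  async commit() {
    this.commitCalls += 1;     // one round trip...
    return this.queued.length; // ...but all n queued writes are applied in it
  }
}

const batch = new FakeBatch();
const docs = Array.from({ length: 1000 }, (_, i) => ({ ref: `doc${i}`, i }));
for (const doc of docs) {
  batch.set(doc.ref, { i: doc.i });
}
// n set() calls queued, exactly one commit() call: O(n) total work.
```

The takeaway: the one-call shape of `commit()` hides n units of work, which is why the commit being "a single call" does not make the operation O(1).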
As you add more documents, the number of write operations grows directly with the number of documents.
| Input Size (n) | Approx. API Calls / Operations |
|---|---|
| 10 | 10 writes added, 1 batch commit |
| 100 | 100 writes added, 1 batch commit |
| 1000 | 1000 writes added, 1 batch commit |
Pattern observation: The number of write additions grows linearly with input size, but the commit is a single call.
Time Complexity: O(n)
This means the total time grows roughly in direct proportion to the number of writes you add to the batch.
[X] Wrong: "Batch writes always take the same time no matter how many writes are included."
[OK] Correct: While the commit is a single network call, the server still has to apply every write in the batch, so total time grows with the number of writes.
Understanding how batch writes scale helps you design efficient data updates and reason clearly about the cost of cloud operations.
What if we split a large batch into multiple smaller batches? How would that affect the time complexity?
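One way to reason about this: Cloud Firestore caps a single batch at 500 writes, so large inputs must be split anyway. A minimal chunking sketch (the helper name `chunk` and the batch size constant are my own choices for illustration):

```javascript
// Split items into groups of at most `size` writes each
// (Firestore caps a single batch at 500 writes).
function chunk(items, size) {
  const out = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// With n documents and batch size 500 you issue ceil(n / 500) commits,
// but the total number of writes is still n: the complexity stays O(n).
const docs = Array.from({ length: 1200 }, (_, i) => i);
const batches = chunk(docs, 500);
// batches.length === 3 (500 + 500 + 200 writes)
```

Splitting changes the constant factors (ceil(n / 500) round trips instead of one) but not the asymptotic cost; committing the smaller batches in parallel can reduce wall-clock time, yet the total work remains proportional to n.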