Migrating from Realtime Database to Firestore in Firebase: Time Complexity
When moving data from Realtime Database to Firestore, it's important to understand how migration time grows as the data size grows. The key question: how does the number of operations change as we move more data? Below, we analyze the time complexity of a migration that reads from Realtime Database and writes to Firestore.
```javascript
import { getDatabase, ref, get } from 'firebase/database';
import { getFirestore, doc, setDoc } from 'firebase/firestore';

const database = getDatabase();
const firestore = getFirestore();

// One read fetches the entire /users node; each user then
// becomes a separate Firestore document write.
const rtdbRef = ref(database, 'users');
get(rtdbRef).then(snapshot => {
  const users = snapshot.val();
  for (const userId in users) {
    // setDoc() returns a promise; errors are not handled
    // in this minimal sketch.
    setDoc(doc(firestore, 'users', userId), users[userId]);
  }
});
```
This code reads all users from Realtime Database in a single request, then writes each user as its own document in Firestore.
Consider what happens repeatedly during the migration:
- Primary operations: one bulk read of all user data, followed by one Firestore write per user.
- How many times: 1 read + n writes for n users.
As the number of users grows, the number of writes grows with it; the read stays a single API call, though the payload it returns grows too.
| Input Size (n) | Approx. API Calls/Operations |
|---|---|
| 10 | 1 read + 10 writes = 11 operations |
| 100 | 1 read + 100 writes = 101 operations |
| 1000 | 1 read + 1000 writes = 1001 operations |
Pattern observation: The total operations grow roughly in a straight line with the number of users.
Time Complexity: O(n)
This means the time to migrate grows directly in proportion to the number of users being moved.
[X] Wrong: "Reading all data once means the migration time stays the same no matter how many users there are."
[OK] Correct: While the read is one operation, each user requires a separate write to Firestore, so more users mean more writes and more time.
Understanding how data migration scales helps you plan and explain real-world cloud tasks clearly and confidently.
"What if we batch multiple user writes into a single Firestore batch write? How would the time complexity change?"