Backup and disaster recovery in Supabase - Time & Space Complexity
When backing up data or recovering from a disaster, it's important to know how long these processes take as data grows. Specifically, we want to understand how the time to back up or restore changes as the amount of data increases.
Analyze the time complexity of the following backup operation using Supabase.
```js
// Export all rows from a table
const { data, error } = await supabase
  .from('orders')
  .select('*')

// Stop if the query failed rather than backing up undefined data
if (error) throw error

// Save data to backup storage
await saveToBackupStorage(data)
```
This sequence fetches all records from a database table and saves them as a backup.
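In practice, a single `select('*')` may not return the whole table: the Supabase API caps response size (PostgREST's default maximum is commonly 1,000 rows, though projects can configure this). A realistic full export therefore pages through the table. The sketch below is a minimal version of that idea; `saveToBackupStorage` and the page size are assumptions carried over from the snippet above, not a fixed Supabase API.

```js
// Minimal paginated export sketch. Assumes `supabase` is an initialized
// client and `saveToBackupStorage` exists; the page size of 1000 matches
// a common PostgREST row cap, but your project's limit may differ.
async function backupAllRows(table, pageSize = 1000) {
  const rows = []
  let from = 0
  while (true) {
    // .range() is inclusive on both ends, so this requests exactly pageSize rows
    const { data, error } = await supabase
      .from(table)
      .select('*')
      .range(from, from + pageSize - 1)
    if (error) throw error
    rows.push(...data)
    if (data.length < pageSize) break // last (partial) page reached
    from += pageSize
  }
  await saveToBackupStorage(rows)
}
```

Pagination only bounds the size of each individual response: the loop still runs roughly n / pageSize times and touches every row, so the total work remains proportional to n.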
Look at what repeats as data grows:
- Primary operation: Fetching all rows from the database table.
- How many times: Once per backup, but the amount of data fetched grows with table size.
As the number of rows increases, the amount of data fetched and saved grows proportionally.
| Input size (n rows) | Approx. data fetched and saved |
|---|---|
| 10 | Baseline: a small payload, quick to fetch and save |
| 100 | 10 times the baseline data, takes longer |
| 1000 | 100 times the baseline data, much longer |
Pattern observation: Time grows roughly in direct proportion to the number of rows.
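You can check this pattern empirically with a rough timing loop. A sketch, assuming the `backupAllRows` helper above and three hypothetical tables pre-seeded with 10, 100, and 1,000 rows (the table names here are illustrative, not part of the original example):

```js
// Time the same backup routine against tables of increasing size.
// `orders_10`, `orders_100`, and `orders_1000` are hypothetical seeded tables.
for (const table of ['orders_10', 'orders_100', 'orders_1000']) {
  const start = performance.now()
  await backupAllRows(table)
  console.log(`${table}: ${(performance.now() - start).toFixed(0)} ms`)
}
```

Network latency and per-request overhead add noise at small sizes, but as n grows the elapsed time should settle into roughly linear growth.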
Time Complexity: O(n)
This means the time to back up grows linearly with the amount of data: doubling the rows roughly doubles the time.
[X] Wrong: "Backing up twice the data only takes twice as many API calls, so it's always fast."
[OK] Correct: Even if the backup uses only a few API calls, the total bytes transferred and written grow with the number of rows, so backing up twice the data takes roughly twice as long.
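One way to gauge that cost before running a full export is to ask the server how many rows are involved. A small sketch using supabase-js's count option: a head request with `count: 'exact'` returns only the row count, with no row data transferred.

```js
// Estimate the backup workload without fetching any row data.
const { count, error } = await supabase
  .from('orders')
  .select('*', { count: 'exact', head: true })

if (error) throw error
console.log(`Backup will export ${count} rows`)
```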
Understanding how backup time grows helps you design systems that handle data safely and efficiently, a key skill in cloud roles.
What if we only backed up changed rows instead of all rows? How would the time complexity change?
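As one hedged answer: if each row carries an `updated_at` timestamp (an assumption; the `orders` table shown earlier may not have one), an incremental backup can filter on it and export only rows changed since the last run.

```js
// Incremental backup sketch: export only rows changed since the last run.
// Assumes an `updated_at` column on `orders`; the watermark value below is
// a placeholder for one persisted by the previous backup.
const lastBackupTime = '2024-01-01T00:00:00Z'

const { data, error } = await supabase
  .from('orders')
  .select('*')
  .gt('updated_at', lastBackupTime)

if (error) throw error
await saveToBackupStorage(data)

// Record a new watermark for the next incremental run
const nextWatermark = new Date().toISOString()
```

With an index on `updated_at`, the database can locate the changed rows without scanning the whole table, so each run's work is roughly O(k), where k is the number of changed rows, rather than O(n). One caveat: deleted rows never match the filter, which is why incremental schemes are usually paired with soft deletes or periodic full backups.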