
Backup and disaster recovery in Supabase - Time & Space Complexity

Time Complexity: Backup and disaster recovery
O(n)
Understanding Time Complexity

When backing up data or recovering from a disaster, it's important to know how long these processes take as data grows.

We want to understand how the time to back up or restore changes as the amount of data increases.

Scenario Under Consideration

Analyze the time complexity of the following backup operation using Supabase.


// Export all rows from a table
const { data, error } = await supabase
  .from('orders')
  .select('*')

// Don't silently back up nothing if the fetch failed
if (error) throw error

// Save data to backup storage
await saveToBackupStorage(data)

This sequence fetches all records from a database table and saves them as a backup.
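In practice, pulling an entire large table in one request can exhaust memory, so backups are often fetched in pages with supabase-js's `.range(from, to)`. The `pageRanges` helper below is a hypothetical illustration of how the inclusive index pairs for that loop could be computed; the commented loop sketches how it would plug into a backup (assuming `supabase`, `countRows`, and `saveToBackupStorage` exist in your code).

```javascript
// Hypothetical helper: split a table of `total` rows into inclusive
// [from, to] index pairs suitable for supabase-js's .range(from, to).
function pageRanges(total, pageSize) {
  const ranges = []
  for (let from = 0; from < total; from += pageSize) {
    ranges.push([from, Math.min(from + pageSize, total) - 1])
  }
  return ranges
}

// Sketch of a paged backup loop (assumed names, not a real API surface):
//
// for (const [from, to] of pageRanges(await countRows('orders'), 1000)) {
//   const { data, error } = await supabase
//     .from('orders')
//     .select('*')
//     .range(from, to)
//   if (error) throw error
//   await saveToBackupStorage(data)
// }
```

Note that paging doesn't change the overall complexity: with `n` rows and a fixed page size, you still fetch and save `n` rows in total.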

Identify Repeating Operations

Look at what repeats as data grows:

  • Primary operation: Fetching all rows from the database table.
  • How many times: Once per backup, but the amount of data fetched grows with table size.

How Execution Grows With Input

As the number of rows increases, the amount of data fetched and saved grows proportionally.

Input Size (n rows)   Approx. Data Fetched & Saved
10                    Small data, quick fetch and save
100                   10x more data, takes longer
1000                  100x more data, much longer

Pattern observation: Time grows roughly in direct proportion to the number of rows.

Final Time Complexity

Time Complexity: O(n)

This means the time to back up grows linearly with the amount of data.
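The linear relationship can be made concrete with a toy cost model: assign a fixed cost per row for fetching and saving, and total cost scales directly with row count. The per-row costs below are illustrative assumptions, not measured Supabase numbers.

```javascript
// Toy cost model: each row costs one fixed unit to fetch and one unit
// to save, so total work is proportional to n. The unit costs are
// illustrative assumptions, not real measurements.
function backupCost(rows, fetchCostPerRow = 1, saveCostPerRow = 1) {
  return rows * (fetchCostPerRow + saveCostPerRow)
}

// Doubling the table doubles the estimated cost, the signature of O(n).
```

For example, `backupCost(1000)` is exactly twice `backupCost(500)`, matching the table above.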

Common Mistake

[X] Wrong: "Backing up twice the data only takes twice as many API calls, so it's always fast."

[OK] Correct: While the number of API calls may stay small, the amount of data transferred grows with the table, so the total backup time still scales linearly.

Interview Connect

Understanding how backup time grows helps you design systems that handle data safely and efficiently, a key skill in cloud roles.

Self-Check

What if we only backed up changed rows instead of all rows? How would the time complexity change?
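One way to reason about the self-check: an incremental backup fetches only the k rows changed since the last run, so each run costs O(k) instead of O(n). With supabase-js this would look roughly like `supabase.from('orders').select('*').gt('updated_at', lastBackupAt)`, assuming the table has an `updated_at` column and you track `lastBackupAt` somewhere; the sketch below simulates that filter over an in-memory array so it is runnable on its own.

```javascript
// Simulated incremental backup: keep only rows changed after the last
// backup timestamp. ISO-8601 timestamps compare correctly as strings.
function changedSince(rows, lastBackupAt) {
  return rows.filter((row) => row.updated_at > lastBackupAt)
}

const orders = [
  { id: 1, updated_at: '2024-01-01T00:00:00Z' },
  { id: 2, updated_at: '2024-03-01T00:00:00Z' },
  { id: 3, updated_at: '2024-03-15T00:00:00Z' },
]

// Only the k changed rows are fetched and saved: O(k), not O(n).
const delta = changedSince(orders, '2024-02-01T00:00:00Z')
```

In the worst case every row has changed and k = n, so incremental backup is still O(n) in the worst case, but typical runs are much cheaper.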