Inserting and querying data in Supabase - Time & Space Complexity
When working with Supabase, it's important to understand how the time to insert and query data grows as the number of records increases.
Let's analyze the time complexity of the following operation sequence.
```javascript
// Insert multiple rows, one API call per row
for (let i = 0; i < n; i++) {
  await supabase
    .from('items')
    .insert([{ name: `Item ${i}`, value: i }])
}

// Query all rows with a single API call
const { data, error } = await supabase
  .from('items')
  .select('*')
```
This code inserts n rows one by one, then fetches all rows from the table.
Identify the operations that repeat: API calls, resource provisioning, and data transfers.
- Primary operation: Insert API call inside the loop, repeated n times.
- How many times: Insert call runs once per item, so n times.
- Query operation: One select call fetching all data once after inserts.
Each insert is a separate API call, so the number of insert calls grows linearly with n.
| Input Size (n) | Approx. API Calls |
|---|---|
| 10 | 10 inserts + 1 query = 11 calls |
| 100 | 100 inserts + 1 query = 101 calls |
| 1000 | 1000 inserts + 1 query = 1001 calls |
Pattern observation: The number of insert calls grows directly with n, while the query call stays constant.
Time Complexity: O(n)
This means the total time grows roughly in direct proportion to the number of items inserted.
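The pattern in the table can be captured as a simple call-count model. This is a minimal sketch; `totalApiCalls` is a hypothetical helper, not part of supabase-js:

```javascript
// Hypothetical helper: models the API call count for the pattern above,
// where each of the n inserts is one call and the final select is one more.
function totalApiCalls(n) {
  const insertCalls = n  // one insert call per row
  const queryCalls = 1   // single select('*') call
  return insertCalls + queryCalls
}

console.log(totalApiCalls(100))  // → 101, matching the table
```

The n term dominates as input grows, which is why the overall complexity is O(n).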
[X] Wrong: "Inserting many rows one by one is as fast as inserting them all at once."
[OK] Correct: Each insert call incurs network latency and server overhead, so many separate calls add up and slow things down.
Understanding how your data operations scale helps you design better apps and answer questions clearly in interviews.
"What if we inserted all rows in a single batch insert call? How would the time complexity change?"
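One way to explore that question: supabase-js accepts an array of rows in a single `.insert(...)` call, so the n separate requests collapse into one. Here's a minimal sketch, assuming a configured `supabase` client and the same `items` table; the `buildRows` helper is hypothetical:

```javascript
// Hypothetical helper: build all n rows up front.
function buildRows(n) {
  const rows = []
  for (let i = 0; i < n; i++) {
    rows.push({ name: `Item ${i}`, value: i })
  }
  return rows
}

// Assumes a configured supabase client: one API call regardless of n.
async function batchInsert(n) {
  const { data, error } = await supabase
    .from('items')
    .insert(buildRows(n))
  return { data, error }
}
```

Building the row array is still O(n) in client CPU and memory, but the number of API calls drops from n to 1, so the per-call network overhead no longer grows with n.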