CRUD operations with supabase-js - Time & Space Complexity
When using supabase-js to create, read, update, or delete data, it's important to know how running time grows as you work with more data. In particular, we want to understand how the number of API calls scales with the number of records we handle.
Analyze the time complexity of the following operation sequence.
```javascript
const { data, error } = await supabase
  .from('items')
  .select('*')
  .eq('category', 'books')
  .limit(n)

for (const item of data) {
  await supabase
    .from('items')
    .update({ status: 'sold' })
    .eq('id', item.id)
    .select() // needed so .single() has a returned row to unwrap
    .single()
}
```
This code fetches a list of items in the 'books' category, then updates each one to mark it as sold.
Identify the API calls, resource provisioning, and data transfers that repeat:
- Primary operation: The update call inside the loop that runs once per item.
- How many times: Exactly n times, where n is the number of items fetched.
As you increase the number of items n, the number of update calls grows right along with it.
| Input Size (n) | Approx. API Calls/Operations |
|---|---|
| 10 | 1 select + 10 updates = 11 calls |
| 100 | 1 select + 100 updates = 101 calls |
| 1000 | 1 select + 1000 updates = 1001 calls |
Pattern observation: The total calls increase roughly in a straight line with the number of items.
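That straight-line pattern can be captured in a tiny helper (a sketch; the `totalCalls` name is ours, not part of supabase-js):

```javascript
// Total API calls for n fetched items: one select plus one update per item.
function totalCalls(n) {
  return 1 + n;
}

console.log(totalCalls(10));   // 11
console.log(totalCalls(1000)); // 1001
```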
Time Complexity: O(n)
This means the time grows directly in proportion to how many items you update.
[X] Wrong: "The update calls inside the loop happen all at once, so time stays the same no matter how many items."
[OK] Correct: Each update waits for the previous one to finish, so the total time adds up with more items.
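A common follow-up idea is to fire the updates with `Promise.all` instead of awaiting them one by one. That can shorten wall-clock time, but it does not reduce the number of requests. A mock sketch (a stand-in counter, not the real supabase client):

```javascript
// Mock update: each invocation stands in for one HTTP request.
let calls = 0;
const mockUpdate = async (id) => {
  calls += 1;
  return { id, status: 'sold' };
};

const items = Array.from({ length: 5 }, (_, i) => ({ id: i + 1 }));

// Concurrent, but still one request per item: O(n) API calls either way.
Promise.all(items.map((item) => mockUpdate(item.id)))
  .then(() => console.log(calls)); // 5
```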
Understanding how your code scales with more data shows you can write efficient cloud operations, a key skill for real projects.
"What if we updated all items in a single batch call instead of one by one? How would the time complexity change?"