Why Edge Functions handle server-side logic in Supabase - Performance Analysis
We want to understand how the work done by an Edge Function grows as it handles more requests or data. How does the time to run server-side logic change as the input size changes? Below, we analyze the time complexity of an Edge Function that processes multiple database queries.
```typescript
// Fetch all items belonging to the user (1 query)
const { data: items, error } = await supabase
  .from('items')
  .select('*')
  .eq('user_id', userId)

// For each item, fetch its details (1 query per item)
for (const item of items ?? []) {
  const { data: details } = await supabase
    .from('details')
    .select('*')
    .eq('item_id', item.id)
}
```
This code fetches the items for a user, then fetches the details for each item. To find the complexity, look at the operations that repeat.
- Primary operation: Database query inside the loop fetching details for each item.
- How many times: Once for each item returned in the first query.
As the number of items grows, the number of detail queries grows too.
| Input Size (n items) | Approx. API Calls/Operations |
|---|---|
| 10 | 1 (items) + 10 (details) = 11 |
| 100 | 1 + 100 = 101 |
| 1000 | 1 + 1000 = 1001 |
Pattern observation: The total calls grow roughly in direct proportion to the number of items.
Time Complexity: O(n)
This means the time grows linearly with the number of items processed.
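The growth in the table above can be sketched with a small counter that models each database round trip as one call. This is a simulation only; `perItemCallCount` is a hypothetical helper, not part of supabase-js:

```typescript
// Models the per-item query pattern: 1 call for the 'items' list,
// then 1 call per item for its 'details'.
function perItemCallCount(n: number): number {
  let calls = 1 // the initial 'items' query
  for (let i = 0; i < n; i++) {
    calls += 1 // one 'details' query per item
  }
  return calls
}

console.log(perItemCallCount(10))   // 11
console.log(perItemCallCount(100))  // 101
console.log(perItemCallCount(1000)) // 1001
```

The call count is n + 1, and dropping the constant gives O(n).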
[X] Wrong: "The number of API calls stays the same no matter how many items there are."
[OK] Correct: Each item causes a separate query, so more items mean more calls.
Understanding how server-side logic scales helps you design efficient cloud functions that handle growing data smoothly.
"What if we combined all detail queries into one batch call? How would the time complexity change?"