Why Postgres powers Supabase - Performance Analysis
We want to understand how the time it takes to serve a request through Supabase grows as the amount of data stored in Postgres increases.
Specifically: how does query time scale as the table gets larger?
To answer that, we analyze the time complexity of querying data from a Postgres table via Supabase.
```javascript
const { data, error } = await supabase
  .from('users')
  .select('*')
  .eq('status', 'active')                     // filter: only active users
  .order('created_at', { ascending: false })  // newest first
  .limit(100)                                 // cap the result set at 100 rows
```
This code fetches up to 100 active users ordered by creation date from Postgres through Supabase.
Identify the operations that repeat: API calls, database work, and data transfers.
- Primary operation: One query request sent to Postgres via Supabase API.
- How many times: Once per data fetch; each fetch triggers one query.
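That single query's work inside Postgres can be modeled as a sequential scan: filter by status, sort by creation date, take the first 100. A minimal sketch, using an in-memory array as a stand-in for the table (this is illustrative, not the Supabase client or real Postgres internals):

```javascript
// Illustrative model of a sequential scan for the query above:
// filter by status, sort by created_at descending, take the first 100.
function runQuery(rows) {
  let rowsScanned = 0
  const matches = []
  for (const row of rows) {
    rowsScanned++ // every row is examined once: O(n)
    if (row.status === 'active') matches.push(row)
  }
  matches.sort((a, b) => b.created_at - a.created_at) // newest first
  return { results: matches.slice(0, 100), rowsScanned }
}
```

Note that `rowsScanned` equals the table size no matter how many rows match: without an index, every row must be visited.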
As the number of rows in the users table grows, the time to find and return the requested data grows too.
| Input Size (n) | Approx. API Calls / Operations |
|---|---|
| 10 | 1 query scanning 10 rows |
| 1000 | 1 query scanning 1000 rows |
| 100000 | 1 query scanning 100000 rows |
Pattern observation: The number of API calls stays the same, but the work inside Postgres grows with data size.
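This split between constant API calls and linear database-side work can be made concrete with a toy model (the counters here are illustrative assumptions, not real Supabase metrics):

```javascript
// Toy model: one API call per fetch, but the database-side work
// (rows scanned by a sequential scan) grows with the table size n.
function fetchActiveUsers(n) {
  const apiCalls = 1 // a single request, regardless of n
  let rowsScanned = 0
  for (let i = 0; i < n; i++) rowsScanned++ // Postgres still inspects every row
  return { apiCalls, rowsScanned }
}

for (const n of [10, 1000, 100000]) {
  const { apiCalls, rowsScanned } = fetchActiveUsers(n)
  console.log(`n=${n}: ${apiCalls} API call, ${rowsScanned} rows scanned`)
}
```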
Time Complexity: O(n) for a sequential scan.
This means the time to get results grows roughly in direct proportion to the number of rows Postgres must scan.
[X] Wrong: "Fetching data from Supabase always takes the same time no matter how much data there is."
[OK] Correct: The query time depends on how much data Postgres must scan or index; more data usually means more work.
Understanding how database queries scale helps you design better apps and explain your choices clearly in conversations.
"What if we added an index on the 'status' column? How would the time complexity change?"
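As a hint for that question: a B-tree index lets Postgres jump to matching rows instead of examining every one, which behaves like binary search on sorted data, roughly O(log n). A hedged sketch, where binary search over a sorted array stands in for an index probe (not actual Postgres internals):

```javascript
// Binary search as a stand-in for a B-tree index probe.
// Each step halves the remaining range, so ~log2(n) comparisons instead of n.
function indexProbe(sorted, target) {
  let lo = 0, hi = sorted.length - 1, steps = 0
  while (lo <= hi) {
    steps++
    const mid = (lo + hi) >> 1
    if (sorted[mid] === target) return { found: true, steps }
    if (sorted[mid] < target) lo = mid + 1
    else hi = mid - 1
  }
  return { found: false, steps }
}

// For n = 100000, a full scan does 100000 comparisons;
// the probe needs at most about 17.
```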