Why Optimization Keeps Supabase Queries Fast - A Performance Analysis
When fetching data with Supabase, a query slows down as the database does more work per request. This analysis looks at how that work grows with table size and why optimizing queries keeps them fast.
How does the amount of work grow when we ask for more data?
Analyze the time complexity of a query that fetches user data with a filter and sorting.
```javascript
import { createClient } from '@supabase/supabase-js'

// Replace with your own project URL and anon key
const supabase = createClient(SUPABASE_URL, SUPABASE_ANON_KEY)

const { data, error } = await supabase
  .from('users')
  .select('*')
  .eq('status', 'active')
  .order('created_at', { ascending: false })
  .limit(100)
```
This query fetches up to 100 users whose status is 'active', sorted by creation date, newest first.
Look at what happens when this query runs:
- Primary operation: The database scans the users table to find matching rows.
- How many times: Once per query, but the scan work depends on how many users exist.
As the number of users grows, the database must check more rows to find the active ones and sort them.
| Input Size (n) | Approx. Rows Checked |
|---|---|
| 10 | 10 |
| 100 | 100 |
| 1000 | 1000 |
Pattern observation: The work grows roughly in direct proportion to the number of users in the table.
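The proportional growth in the table above can be sketched with a toy sequential scan. This is plain JavaScript for illustration, not the Supabase client, and the row data is made up:

```javascript
// Toy model of a sequential scan: every row must be examined
// to decide whether it matches the filter.
function scanForActive(rows) {
  let checked = 0
  const matches = []
  for (const row of rows) {
    checked += 1 // the scan touches this row no matter what
    if (row.status === 'active') matches.push(row)
  }
  return { matches, checked }
}

// Rows checked grows in lockstep with table size: O(n).
for (const n of [10, 100, 1000]) {
  const table = Array.from({ length: n }, (_, i) => ({
    id: i,
    status: i % 2 === 0 ? 'active' : 'inactive',
  }))
  console.log(n, scanForActive(table).checked) // checked === n
}
```

Running this reproduces the table: 10 rows means 10 checks, 1000 rows means 1000 checks.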
Time Complexity: O(n)
This means the time to run the query grows linearly as the number of users grows.
[X] Wrong: "Adding a filter always makes the query faster."
[OK] Correct: Without proper indexes, the database still scans all rows to apply the filter, so the query can remain slow.
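The correction above can be made concrete with a small sketch (plain JavaScript, illustrative only): without an index, the filter is applied while scanning, so a highly selective filter checks just as many rows as a loose one.

```javascript
// Without an index, the filter runs during the scan, so rows
// checked depends only on table size, not on filter selectivity.
function countRowsChecked(rows, predicate) {
  let checked = 0
  for (const row of rows) {
    checked += 1   // checked regardless of the predicate result
    predicate(row) // the filter only decides what gets returned
  }
  return checked
}

const makeTable = (n, activeEvery) =>
  Array.from({ length: n }, (_, i) => ({
    id: i,
    status: i % activeEvery === 0 ? 'active' : 'inactive',
  }))

const loose = makeTable(10000, 2)    // ~50% of rows are active
const strict = makeTable(10000, 100) // ~1% of rows are active
const isActive = (row) => row.status === 'active'

console.log(countRowsChecked(loose, isActive))  // 10000
console.log(countRowsChecked(strict, isActive)) // 10000 - same work
```

Both queries check all 10,000 rows even though the strict filter returns far fewer of them.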
Understanding how query time grows helps you explain why indexes and query design matter. It shows you can reason about real database performance, a skill that matters in backend and cloud engineering roles.
"What if we add an index on the 'status' column? How would the time complexity change?"