Why server-side programming matters in PostgreSQL - Performance Analysis
We want to understand how the time it takes to run server-side database code changes as the amount of data grows.
How does the work done by the server grow when more data is involved?
Analyze the time complexity of the following code snippet.
-- Select all users who joined after a certain date
SELECT *
FROM users
WHERE join_date > '2023-01-01';
-- Count how many such users exist
SELECT COUNT(*)
FROM users
WHERE join_date > '2023-01-01';
This code fetches users who joined after January 1, 2023, and counts them.
Identify the operations that repeat (loops, recursion, or row traversals). SQL has no explicit loops here, but without an index the server performs an implicit one: a sequential scan that visits every row of the table.
- Primary operation: Scanning the users table rows to check the join_date condition.
- How many times: Once for each user row in the table.
As the number of users grows, the server must check more rows to find those who joined after the date.
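To make the implicit loop visible, here is a minimal Python sketch of what the sequential scan conceptually does. The in-memory `users` list and its three rows are hypothetical stand-ins for the table, not real PostgreSQL behavior:

```python
from datetime import date

# Hypothetical in-memory stand-in for the users table (invented rows).
users = [
    {"name": "ada", "join_date": date(2022, 6, 1)},
    {"name": "bob", "join_date": date(2023, 3, 15)},
    {"name": "cay", "join_date": date(2023, 8, 9)},
]

cutoff = date(2023, 1, 1)

# A sequential scan visits every row once and evaluates the WHERE
# predicate on each: exactly one check per row, O(n) work overall.
matches = [row for row in users if row["join_date"] > cutoff]
count = len(matches)

print(count)  # 2
```

The `COUNT(*)` query does the same per-row work; it simply returns the tally instead of the rows.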
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 row checks |
| 100 | 100 row checks |
| 1000 | 1000 row checks |
Pattern observation: The work grows directly with the number of users; double the users means double the checks.
Time Complexity: O(n)
This means the time to run the query grows linearly with the number of rows in the users table: roughly doubling the rows doubles the work.
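The row-check counts in the table above can be reproduced with a small simulation. This is only an illustration of the counting argument, not a measurement of PostgreSQL itself:

```python
def row_checks_for_scan(n):
    """Count predicate evaluations a sequential scan performs over n rows."""
    checks = 0
    for _row in range(n):  # stand-in for visiting each table row
        checks += 1        # one join_date comparison per row
    return checks

# Matches the table: 10 -> 10 checks, 100 -> 100, 1000 -> 1000.
for n in (10, 100, 1000):
    print(n, row_checks_for_scan(n))
```

Doubling `n` doubles the return value, which is exactly the O(n) pattern the table shows.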
[X] Wrong: "The query time stays the same no matter how many users there are."
[OK] Correct: The server must check each row to see if it matches the condition, so more rows mean more work.
Understanding how server-side queries scale helps you write efficient code and explain your reasoning clearly in real-world situations.
"What if we added an index on join_date? How would the time complexity change?"