Work_mem and effective_cache_size tuning in PostgreSQL - Time & Space Complexity
When tuning PostgreSQL settings like work_mem and effective_cache_size, we want to understand how query execution time changes as data size grows.
We ask: How does memory allocation affect the speed of sorting and joining operations as input grows?
Analyze the time complexity impact of this query with memory settings:
```sql
SET work_mem = '4MB';
SET effective_cache_size = '128MB';

SELECT *
FROM orders
JOIN customers ON orders.customer_id = customers.id
ORDER BY orders.order_date DESC
LIMIT 100;
```
This query joins two tables and sorts orders by date, using memory settings to control sorting and caching.
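One way to watch this in practice is `EXPLAIN (ANALYZE)`, which reports which sort method PostgreSQL chose and whether the sort stayed within `work_mem` or spilled to disk. The table and column names below match the query above; the exact plan lines shown in the comments are illustrative and will vary with your data and PostgreSQL version:

```sql
-- Run the query under EXPLAIN ANALYZE to inspect the actual plan.
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM orders
JOIN customers ON orders.customer_id = customers.id
ORDER BY orders.order_date DESC
LIMIT 100;

-- In the output, find the Sort node and check its method line, e.g.:
--   Sort Method: top-N heapsort  Memory: 120kB   (fit within work_mem)
--   Sort Method: external merge  Disk: 8200kB    (spilled past work_mem)
```

Note that because the query has a `LIMIT`, PostgreSQL may use a top-N heapsort, which only keeps the best 100 rows in memory rather than sorting everything.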
Look at what repeats during query execution:
- Primary operation: Scanning and sorting rows from the `orders` table.
- How many times: Once per row in `orders`, plus once per matching row in `customers`.
As the number of rows in orders grows, the sorting work grows too.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | ≈ 10 × log₂ 10 ≈ 33 |
| 100 | ≈ 100 × log₂ 100 ≈ 664 |
| 1,000 | ≈ 1,000 × log₂ 1,000 ≈ 9,966 |
Pattern observation: Sorting work grows a bit faster than the number of rows, roughly proportional to n times log n.
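You can reproduce these approximate operation counts directly in PostgreSQL as a quick sanity check (this snippet needs no tables, only a session to run in):

```sql
-- Approximate n * log2(n) comparison counts for a few input sizes.
SELECT n,
       round(n * log(2, n)) AS approx_operations
FROM unnest(ARRAY[10, 100, 1000]) AS n;

--   n   | approx_operations
--  10   |    33
--  100  |   664
--  1000 |  9966
```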
Time Complexity: O(n log n)
This means the time to sort rows grows a little faster than the number of rows, because sorting compares items multiple times.
[X] Wrong: "Increasing work_mem always makes queries run in constant time regardless of data size."
[OK] Correct: More memory helps sorting fit in RAM, but sorting still needs to compare many rows, so time grows with data size.
Understanding how memory settings affect query time helps you explain performance tuning clearly and confidently in real situations.
"What if we increased work_mem enough to hold all rows in memory? How would the time complexity change?"
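As a sketch of that experiment (the `256MB` value below is arbitrary, chosen only to be comfortably larger than the sort's footprint): raising `work_mem` lets the sort run entirely in RAM, which removes disk I/O but not the row comparisons, so the growth rate stays O(n log n):

```sql
-- Give the sort enough memory to avoid spilling (value is illustrative).
SET work_mem = '256MB';

EXPLAIN (ANALYZE)
SELECT *
FROM orders
JOIN customers ON orders.customer_id = customers.id
ORDER BY orders.order_date DESC
LIMIT 100;

-- If the Sort node now reports "Sort Method: quicksort" (or
-- "top-N heapsort") instead of "external merge  Disk: ...",
-- the sort fits in memory. The constant factor drops, but the
-- time still grows as O(n log n) with the row count.
```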