CTE with INSERT, UPDATE, DELETE (writable CTEs) in PostgreSQL - Time & Space Complexity
When using writable CTEs in PostgreSQL, we want to know how the time to run the query changes as the data grows.
We ask: How does the number of rows affect the work done by INSERT, UPDATE, or DELETE inside a CTE?
Analyze the time complexity of this writable CTE example.
```sql
WITH updated_rows AS (
    UPDATE employees
    SET salary = salary * 1.1
    WHERE department = 'Sales'
    RETURNING *
)
DELETE FROM employees
WHERE id IN (SELECT id FROM updated_rows WHERE salary > 100000);
```
This query first gives every Sales employee a 10% raise, then deletes the ones whose new (post-raise) salary exceeds 100,000. Note that `RETURNING *` hands the updated rows, with their new salary values, to the outer DELETE.
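You can confirm this execution shape with EXPLAIN. The plan sketched in the comments below is only a typical shape (exact nodes, costs, and join strategy depend on table size, statistics, and indexes): the UPDATE appears as a CTE, and the outer DELETE reads its rows through a CTE Scan.

```sql
EXPLAIN
WITH updated_rows AS (
    UPDATE employees
    SET salary = salary * 1.1
    WHERE department = 'Sales'
    RETURNING *
)
DELETE FROM employees
WHERE id IN (SELECT id FROM updated_rows WHERE salary > 100000);

-- Typical plan shape (details vary by installation):
--   Delete on employees
--     CTE updated_rows
--       ->  Update on employees
--             ->  Seq Scan on employees  (Filter: department = 'Sales')
--     ->  ... semi-join against CTE Scan on updated_rows
```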
Look for repeated work inside the query.
- Primary operation: scanning the employees table and updating the rows that match the department condition.
- How many times: each matching row is processed once by the UPDATE; the rows it returns are then filtered once more by the DELETE.
As the number of employees grows, the work grows roughly in proportion.
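One way to check the linear pattern empirically is to build a throwaway table and run the statement under EXPLAIN ANALYZE at a few sizes. This is a sketch that assumes a minimal employees table (id, department, salary); the seed data and column types are illustrative, not part of the original example:

```sql
-- Hypothetical test table; adjust names and types to your schema.
CREATE TEMP TABLE employees (
    id         serial PRIMARY KEY,
    department text,
    salary     numeric
);

-- Seed n rows, half of them in Sales; rerun with n = 10, 100, 1000, ...
INSERT INTO employees (department, salary)
SELECT CASE WHEN g % 2 = 0 THEN 'Sales' ELSE 'Support' END,
       50000 + (g % 100) * 1000
FROM generate_series(1, 1000) AS g;

-- EXPLAIN ANALYZE reports actual row counts and timing per plan node,
-- so you can watch the work grow roughly in step with n.
EXPLAIN ANALYZE
WITH updated_rows AS (
    UPDATE employees
    SET salary = salary * 1.1
    WHERE department = 'Sales'
    RETURNING *
)
DELETE FROM employees
WHERE id IN (SELECT id FROM updated_rows WHERE salary > 100000);
```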
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 rows updated and checked |
| 100 | About 100 rows updated and checked |
| 1000 | About 1000 rows updated and checked |
Pattern observation: The operations grow linearly with the number of rows affected.
Time Complexity: O(n)
This means the time to run the query grows roughly in direct proportion to the number of rows involved.
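One detail refines what n means here. An index on department lets the UPDATE locate the Sales rows without scanning the whole table, but every matching row must still be rewritten, so the work remains linear in the number of affected rows. The statement below is an optional sketch, not part of the original example:

```sql
-- Optional: an index narrows the scan to matching rows, but the UPDATE
-- still touches each matching row once, so growth stays linear in the
-- number of Sales employees rather than the full table size.
CREATE INDEX IF NOT EXISTS employees_department_idx
    ON employees (department);
```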
[X] Wrong: "The DELETE inside the CTE runs separately and doubles the work for all rows."
[OK] Correct: The DELETE's candidate set comes only from the rows the UPDATE returned, so it removes at most that subset; it does not repeat the full update work across the whole table.
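One way to see the subset relationship directly is to count both sides. The variant below is a sketch: it adds RETURNING to the DELETE so you can compare how many rows were deleted with how many the UPDATE touched; the deleted count can never exceed the updated count.

```sql
WITH updated_rows AS (
    UPDATE employees
    SET salary = salary * 1.1
    WHERE department = 'Sales'
    RETURNING *
)
DELETE FROM employees
WHERE id IN (SELECT id FROM updated_rows WHERE salary > 100000)
RETURNING id;  -- lists only the deleted rows, a subset of the updated ones
```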
Understanding how writable CTEs scale helps you explain query performance clearly and shows you can reason about real database operations.
"What if the DELETE condition was changed to remove rows based on a different column not related to the UPDATE? How would that affect time complexity?"