Row-level locking (FOR UPDATE, FOR SHARE) in PostgreSQL - Time & Space Complexity
When using row-level locking in a database, it is important to understand how the time spent locking rows grows as the number of matching rows increases — in other words, how the locking operation scales.
Analyze the time complexity of the following PostgreSQL query using row-level locking.
SELECT * FROM orders
WHERE status = 'pending'
FOR UPDATE;
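One way to see the per-row locking in PostgreSQL itself is to look at the query plan. This is a sketch — exact costs and plan shape depend on your table, statistics, and PostgreSQL version:

```sql
EXPLAIN SELECT * FROM orders WHERE status = 'pending' FOR UPDATE;

-- Typical plan shape (costs omitted):
--  LockRows
--    ->  Seq Scan on orders
--          Filter: (status = 'pending'::text)
--
-- The LockRows node sits above the scan and locks each row
-- the scan produces, one row at a time.
```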
This query selects all rows with status 'pending' and locks them to prevent other transactions from modifying them until this transaction finishes.
Identify the repeated work: the loops, recursion, or row traversals that execute more than once.
- Primary operation: Scanning rows that match the condition and locking each one.
- How many times: Once for each matching row in the table.
As the number of rows with status 'pending' grows, the database must lock more rows one by one. (Note: without an index, the scan that *finds* those rows also visits every row in the table, so the scan cost depends on total table size, while the locking cost depends on the number of matching rows.)
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | Locks 10 rows |
| 100 | Locks 100 rows |
| 1000 | Locks 1000 rows |
Pattern observation: The number of locking operations grows directly with the number of rows locked.
Time Complexity: O(n)
This means the time to lock rows grows linearly with the number of rows matched and locked.
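The linear pattern can be illustrated with a small simulation. This is not how PostgreSQL is implemented — it is a hypothetical sketch that counts one lock acquisition per matching row, mirroring the analysis above:

```python
def lock_pending_rows(rows):
    """Simulate FOR UPDATE: scan rows and acquire one lock per match.

    Purely illustrative — real row locking lives inside the database.
    """
    lock_ops = 0
    locked = []
    for row in rows:                      # scan every row in the table
        if row["status"] == "pending":    # WHERE status = 'pending'
            lock_ops += 1                 # one lock acquisition per match
            locked.append(row)
    return locked, lock_ops

# 20 rows, every other one 'pending' -> 10 matches, 10 lock operations
table = [{"id": i, "status": "pending" if i % 2 == 0 else "shipped"}
         for i in range(20)]
locked, ops = lock_pending_rows(table)
print(ops)  # 10
```

Doubling the number of 'pending' rows doubles `lock_ops` — the O(n) relationship from the table above.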
[X] Wrong: "Locking rows happens all at once, so time does not depend on how many rows are locked."
[OK] Correct: Each row must be locked individually, so more rows mean more work and more time.
Understanding how row-level locking scales helps you reason about database performance and concurrency control in real applications.
"What if we added an index on the status column? How would the time complexity of locking rows change?"