Concurrency Control in DBMS: A Performance Analysis of Locking
Concurrency control manages how multiple database users access data at the same time.
We want to understand how the cost of managing this access grows as more users work together.
Analyze the time complexity of this concurrency control example using locking.
```sql
BEGIN TRANSACTION;
LOCK TABLE accounts IN EXCLUSIVE MODE;
UPDATE accounts SET balance = balance - 100 WHERE account_id = 1;
UPDATE accounts SET balance = balance + 100 WHERE account_id = 2;
COMMIT;
```
This code locks the accounts table to safely transfer money between two accounts.
Look for repeated actions that affect time cost.
- Primary operation: acquiring an exclusive lock on the table so no other transaction can change the data at the same time.
- How many times: once per transaction, but when many users run transactions concurrently, each one must wait its turn for the single exclusive lock.
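The serializing effect of the exclusive lock can be sketched in a few lines. This is a toy in-memory model, not a real DBMS: the `accounts` dictionary stands in for the table, and one `threading.Lock` plays the role of `LOCK TABLE ... IN EXCLUSIVE MODE`.

```python
import threading

# Toy model of the accounts table; one lock guards the whole "table",
# so concurrent transfers serialize, exactly like the SQL example.
accounts = {1: 500, 2: 500}
table_lock = threading.Lock()  # stands in for LOCK TABLE ... EXCLUSIVE MODE

def transfer(src, dst, amount):
    with table_lock:  # every transaction queues here for the one lock
        accounts[src] -= amount
        accounts[dst] += amount

threads = [threading.Thread(target=transfer, args=(1, 2, 100)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(accounts)  # five serialized transfers: {1: 0, 2: 1000}
```

Because every thread must pass through the same lock, the transfers run one after another, and the total balance is preserved no matter how the threads interleave.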
As more users try to access the data, the waiting time can increase.
| Number of Users (n) | Approx. Waiting Operations |
|---|---|
| 10 | 9 waits if all want the lock |
| 100 | 99 waits, longer queue |
| 1000 | 999 waits, much longer delays |
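The pattern in the table can be captured with a one-line model. The function name `waiting_operations` is illustrative, and the model assumes the worst case where every user wants the same lock at the same time:

```python
# Worst case: all n users contend for the same exclusive lock, so each
# new arrival queues behind the others -- n - 1 transactions wait.
def waiting_operations(n):
    return n - 1

for n in (10, 100, 1000):
    print(f"{n} users -> {waiting_operations(n)} waits")
```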
Pattern observation: Waiting grows roughly in direct proportion to the number of users competing for the lock.
Time Complexity: O(n)
This means the time to complete transactions grows linearly as more users try to access the data at once.
[X] Wrong: "Locking always makes database operations slow regardless of user count."
[OK] Correct: Locking only causes delays when many users try to access the same data simultaneously; with few users, the impact is minimal.
Understanding how concurrency control affects performance helps you explain database behavior clearly and shows you grasp real-world system challenges.
What if we changed from table-level locking to row-level locking? How would the time complexity change?
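One way to explore that question is to extend the toy model above with one lock per row instead of one lock for the whole table. This is a simplified sketch, not how a real DBMS implements row locks; the `row_locks` dictionary and account IDs are illustrative. Transfers touching disjoint rows can now proceed in parallel, so contention depends on how many users touch the *same* rows rather than on the total number of users.

```python
import threading

# Hypothetical row-level locking: one lock per account row.
accounts = {i: 500 for i in range(1, 5)}
row_locks = {i: threading.Lock() for i in accounts}

def transfer(src, dst, amount):
    # Acquire row locks in a fixed (sorted) order to avoid deadlock.
    first, second = sorted((src, dst))
    with row_locks[first], row_locks[second]:
        accounts[src] -= amount
        accounts[dst] += amount

# Two transfers on disjoint row pairs: neither waits for the other's locks.
t1 = threading.Thread(target=transfer, args=(1, 2, 100))
t2 = threading.Thread(target=transfer, args=(3, 4, 100))
t1.start(); t2.start()
t1.join(); t2.join()
print(accounts)  # {1: 400, 2: 600, 3: 400, 4: 600}
```

Note the fixed locking order: if two transfers grabbed their rows in opposite orders, each could hold one lock while waiting for the other's, and neither would ever finish.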