Optimistic Concurrency Control (OCC) in DBMS Theory: Time Complexity
We want to understand how the time cost of optimistic concurrency control changes as more transactions run.
How does the system handle many users trying to update data at the same time?
Analyze the time complexity of the following optimistic concurrency control steps.
```sql
BEGIN TRANSACTION;
READ data;
-- user works on data locally --
VALIDATE that data was not changed by others;
IF validation passes THEN
    COMMIT changes;
ELSE
    ROLLBACK and retry;
END IF;
```
This code shows how a transaction reads data, checks for conflicts before saving, and retries if needed.
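The read–validate–commit loop above can be sketched in Python. This is a simplified, single-record model (a real DBMS validates entire read/write sets, not one version counter); the `Record` class and `occ_update` helper are illustrative names, not a real database API.

```python
class Record:
    """A single data item with a version number used for optimistic validation."""
    def __init__(self, value=0):
        self.value = value
        self.version = 0

def occ_update(record, compute):
    """One optimistic transaction: read, work locally, validate, then
    commit or retry. Returns the number of attempts it took."""
    attempts = 0
    while True:
        attempts += 1
        # READ: snapshot the value and remember the version we saw
        seen_version = record.version
        local = compute(record.value)  # user works on data locally
        # VALIDATE: did anyone else commit in the meantime?
        if record.version == seen_version:
            # COMMIT: publish the new value and bump the version
            record.value = local
            record.version += 1
            return attempts
        # else: ROLLBACK (discard local work) and retry the loop

r = Record(10)
print(occ_update(r, lambda v: v + 5))  # → 1 (no conflict, commits on first try)
print(r.value)                         # → 15
```

In this single-threaded sketch validation always passes; the retry branch only fires when another writer bumps `record.version` between the read and the validation.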
Look for repeated checks or retries that affect time.
- Primary operation: Validation step checking for conflicts.
- How many times: Depends on number of retries if conflicts occur.
As more transactions run, conflicts may increase, causing more retries.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 validations, few retries |
| 100 | About 100 validations, some retries |
| 1000 | About 1000 validations, more retries possible |
Pattern observation: More transactions mean more validations and possible retries, increasing total work.
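The table's pattern can be reproduced with a toy simulation. This assumes a deliberately simplified model: each validation fails (forcing a retry) with a fixed probability, independent of the others; real conflict rates depend on workload and tend to rise with concurrency.

```python
import random

def simulate(n_txns, conflict_prob, seed=1):
    """Count total validation attempts for n_txns transactions when each
    validation fails with probability conflict_prob (simplified model)."""
    rng = random.Random(seed)
    validations = 0
    for _ in range(n_txns):
        while True:
            validations += 1
            if rng.random() >= conflict_prob:  # validation passes → commit
                break
            # validation failed → rollback and retry
    return validations

for n in (10, 100, 1000):
    print(n, simulate(n, conflict_prob=0.2))
```

With `conflict_prob=0`, the count is exactly `n` (one validation per transaction); as the conflict probability grows, retries inflate the total beyond `n`.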
Time Complexity: O(n)
Under low contention, total time grows roughly in direct proportion to the number of transactions trying to commit: each performs a constant-time validation, and retries are rare. Under heavy contention, repeated retries add work beyond that linear baseline, which is why optimistic concurrency control suits read-heavy, low-conflict workloads best.
[X] Wrong: "Validation always takes constant time regardless of transactions."
[OK] Correct: While a single validation is typically constant time, the overall time grows with concurrent transactions due to increased retries from conflicts.
Understanding how optimistic concurrency control scales helps you explain database behavior under load and design better systems.
"What if the system used pessimistic locking instead? How would the time complexity change?"
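As a starting point for that question, here is a minimal pessimistic sketch using Python's `threading.Lock` (the `LockedRecord` class is a hypothetical name for illustration): the lock is taken *before* the read, so validation and retries disappear, but concurrent writers serialize and spend time waiting instead.

```python
import threading

class LockedRecord:
    """Pessimistic alternative: take an exclusive lock before reading,
    so no validation or retry is ever needed, at the cost of blocking."""
    def __init__(self, value=0):
        self.value = value
        self._lock = threading.Lock()

    def update(self, compute):
        with self._lock:  # acquire before READ; other writers block here
            self.value = compute(self.value)  # read, compute, write atomically

r = LockedRecord(10)
r.update(lambda v: v + 5)
print(r.value)  # → 15
```

Each individual update is still constant time, but n contending writers queue up behind the lock, so total wall-clock time is dominated by waiting rather than by retries.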