Why Transactions Ensure Data Consistency in a DBMS: Performance Analysis
We want to understand how the time taken by transactions changes as the number of operations grows, and how transactions keep data consistent while running multiple steps.
Analyze the time complexity of this transaction example.
```sql
BEGIN TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
```
This transaction moves money from one account to another; both updates succeed or fail together (atomicity), so the total balance is never left in an inconsistent state.
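The same transfer can be sketched in Python with the standard-library `sqlite3` module (the `accounts` table and starting balances here are illustrative assumptions, not from the original example):

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode, so the
# BEGIN/COMMIT below are controlled explicitly, mirroring the SQL above.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 500), (2, 300)])

try:
    conn.execute("BEGIN")
    conn.execute("UPDATE accounts SET balance = balance - 100 WHERE id = 1")
    conn.execute("UPDATE accounts SET balance = balance + 100 WHERE id = 2")
    conn.execute("COMMIT")
except sqlite3.Error:
    conn.execute("ROLLBACK")  # undo any partial update so the total is unchanged

print(dict(conn.execute("SELECT id, balance FROM accounts")))  # {1: 400, 2: 400}
```

If the second UPDATE failed, the ROLLBACK would restore both balances, which is exactly how the transaction keeps the total consistent.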
Look for repeated actions that affect time.
- Primary operation: Two update statements inside one transaction.
- How many times: Each update runs once per transaction.
As the number of accounts or transactions grows, the time grows roughly in proportion to the number of updates.
| Transactions (n) | Approx. Update Operations |
|---|---|
| 10 | 20 |
| 100 | 200 |
| 1,000 | 2,000 |
Pattern observation: Time grows linearly with the number of transactions because each transaction has a fixed number of steps.
Time Complexity: O(n)
This means the time to complete all transactions grows directly with how many transactions you run.
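The table's arithmetic can be reproduced with a short sketch (the constant of 2 updates per transaction comes from the transfer example above):

```python
# Each transfer transaction performs a fixed number of updates (2 in the
# example), so total work scales linearly with the transaction count n.
UPDATES_PER_TRANSACTION = 2

def total_updates(n_transactions: int) -> int:
    return n_transactions * UPDATES_PER_TRANSACTION

for n in (10, 100, 1000):
    print(f"{n} transactions -> {total_updates(n)} updates")  # 20, 200, 2000
```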
[X] Wrong: "Transactions always take the same time no matter how many operations they include."
[OK] Correct: More operations inside a transaction mean more work, so time grows with the number of steps.
Understanding how transaction time grows helps you explain database behavior clearly and demonstrates how data consistency can be maintained efficiently.
"What if the transaction included a loop updating 100 accounts instead of just two? How would the time complexity change?"
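One way to explore this question is a sketch (again assuming SQLite and an illustrative `accounts` table): a single transaction that updates k = 100 accounts in a loop now performs k updates instead of 2, so each transaction costs O(k) and n such transactions cost O(n * k).

```python
import sqlite3

# Hypothetical variant: one atomic transaction crediting k accounts in a loop.
k = 100
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, 0)", [(i,) for i in range(1, k + 1)])

conn.execute("BEGIN")
for account_id in range(1, k + 1):  # k updates inside one transaction
    conn.execute("UPDATE accounts SET balance = balance + 10 WHERE id = ?",
                 (account_id,))
conn.execute("COMMIT")  # all k updates become visible together

(total,) = conn.execute("SELECT SUM(balance) FROM accounts").fetchone()
print(total)  # 1000
```

The transaction is still atomic, but its per-transaction cost is no longer constant: the loop length k becomes part of the complexity.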