Auto-commit behavior in SQL - Time Complexity
When auto-commit is enabled in SQL, each statement is committed (made permanent) immediately after it executes. We want to understand how this affects total running time as more statements are run.
How does the cost grow when many statements run with auto-commit on?
Analyze the time complexity of the following SQL statements with auto-commit enabled.
-- Auto-commit is ON by default
INSERT INTO orders (order_id, product) VALUES (1, 'Book');
UPDATE orders SET product = 'Notebook' WHERE order_id = 1;
DELETE FROM orders WHERE order_id = 1;
-- Each statement commits immediately
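The same three statements can be exercised from a small script. A minimal sketch using Python's `sqlite3` module: passing `isolation_level=None` puts the connection in auto-commit mode, so each statement is committed as soon as it runs (the `orders` table definition here is an assumption, since the original snippet does not show it).

```python
import sqlite3

# isolation_level=None puts sqlite3 in auto-commit mode:
# every statement below is committed the moment it finishes.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, product TEXT)")

conn.execute("INSERT INTO orders (order_id, product) VALUES (1, 'Book')")
conn.execute("UPDATE orders SET product = 'Notebook' WHERE order_id = 1")
conn.execute("DELETE FROM orders WHERE order_id = 1")

# Three row changes (insert, update, delete), each made durable on its own.
row_changes = conn.total_changes
print(row_changes)  # 3
```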
This code runs three separate statements; each one is committed immediately after it finishes.
Look at what repeats as we run more statements with auto-commit.
- Primary operation: Executing and committing each SQL statement one by one.
- How many times: Once per statement, so running n statements triggers n commits.
Each statement triggers its own commit, so the total work grows in direct proportion to the number of statements.
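The one-commit-per-statement pattern can be simulated directly. A hedged sketch (the table `t` and the explicit `commit()` calls are illustrative; the explicit commit stands in for what auto-commit does implicitly after every statement):

```python
import sqlite3

def run_statements(n):
    """Run n INSERT statements, committing after each one,
    mirroring what auto-commit does implicitly."""
    conn = sqlite3.connect(":memory:")  # manual-commit mode, so commits are countable
    conn.execute("CREATE TABLE t (id INTEGER)")
    conn.commit()
    commits = 0
    for i in range(n):
        conn.execute("INSERT INTO t (id) VALUES (?)", (i,))
        conn.commit()  # one commit per statement, as auto-commit would do
        commits += 1
    conn.close()
    return commits

print(run_statements(10))   # 10 statements -> 10 commits
print(run_statements(100))  # 100 statements -> 100 commits
```

The commit count tracks the statement count exactly, which is the source of the linear growth tabulated below.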
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 commits |
| 100 | 100 commits |
| 1000 | 1000 commits |
Pattern observation: The total work grows directly with the number of statements because each commits separately.
Time Complexity: O(n)
This means the total time grows linearly: running twice as many statements with auto-commit on takes roughly twice as long.
[X] Wrong: "Auto-commit groups all statements together, so the time stays the same no matter how many statements run."
[OK] Correct: Auto-commit commits after every statement, so the cost accumulates with each statement rather than being paid once.
Understanding how auto-commit affects time helps you explain database behavior clearly and shows you think about how systems work under the hood.
"What if we turned off auto-commit and committed only once after all statements? How would the time complexity change?"