Integrity Constraints in DBMS Theory: Time & Space Complexity
When a database enforces integrity constraints, every insert or update pays the cost of checking those rules, so it is important to understand how that cost affects performance.
Specifically, we want to know how the time to verify a constraint grows as the amount of data increases.
Analyze the time complexity of the following SQL constraint check.
-- Check uniqueness of a column before insert (e.g., inside a BEFORE INSERT
-- trigger, where `new` refers to the row about to be inserted)
SELECT COUNT(*) FROM Employees WHERE EmployeeID = new.EmployeeID;
-- If the count is 0, allow the insert; otherwise reject it
This code checks if a new employee ID already exists to keep IDs unique.
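The same pre-insert check can be sketched in application code. Below is a minimal Python version using an in-memory SQLite database; the table layout and the `insert_employee` helper are illustrative, not part of the original example.

```python
import sqlite3

# Illustrative setup: a small Employees table with two existing rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employees (EmployeeID INTEGER, Name TEXT)")
conn.execute("INSERT INTO Employees VALUES (1, 'Ada'), (2, 'Lin')")

def insert_employee(conn, emp_id, name):
    """Allow the insert only if EmployeeID is not already present."""
    (count,) = conn.execute(
        "SELECT COUNT(*) FROM Employees WHERE EmployeeID = ?", (emp_id,)
    ).fetchone()
    if count == 0:
        conn.execute("INSERT INTO Employees VALUES (?, ?)", (emp_id, name))
        return True
    return False  # duplicate ID: reject the insert

print(insert_employee(conn, 3, "Sam"))  # True  (new ID, accepted)
print(insert_employee(conn, 1, "Eve"))  # False (duplicate, rejected)
```

Without an index on `EmployeeID`, the `SELECT COUNT(*)` here is exactly the scan whose cost we analyze below.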
To find the complexity, identify the operation that repeats when the constraint is checked.
- Primary operation: scanning the Employees table for a row with a matching EmployeeID.
- How many times: the check runs once per insert, but without an index the scan may examine every row, so the work per check grows with the table size.
As the number of employees grows, the time to check uniqueness grows too.
| Rows in table (n) | Approx. row comparisons |
|---|---|
| 10 | ~10 |
| 100 | ~100 |
| 1000 | ~1000 |
Pattern observation: the number of comparisons grows in direct proportion to the number of rows.
Time Complexity: O(n)
This means the time to check the constraint grows linearly with the number of rows in the table.
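The linear pattern in the table can be reproduced with a short simulation. The sketch below (the function name and setup are illustrative) counts how many row comparisons a sequential scan needs when the new ID is not in the table, i.e., the worst case:

```python
# Simulate the linear scan a database performs without an index:
# count the row comparisons needed to confirm an ID is absent.
def comparisons_for_uniqueness_check(ids, new_id):
    comparisons = 0
    for existing in ids:      # worst case: visit every row
        comparisons += 1
        if existing == new_id:
            break             # duplicate found: stop early
    return comparisons

for n in (10, 100, 1000):
    table = list(range(n))    # n distinct existing IDs
    # new_id = -1 is never present, so the scan checks all n rows
    print(n, comparisons_for_uniqueness_check(table, -1))
```

Running this prints 10, 100, and 1000 comparisons respectively, matching the table above: the cost is O(n).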
[X] Wrong: "Checking a constraint always takes the same time no matter how big the table is."
[OK] Correct: The database often needs to look through many rows to verify constraints, so more data usually means more work.
Understanding how constraints affect performance helps you design databases that stay fast as they grow.
What if we added an index on EmployeeID? How would the time complexity change?
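With a B-tree index on EmployeeID, the database no longer scans every row: it walks the sorted index, which behaves like a binary search, so the check drops from O(n) to O(log n). A minimal sketch of that idea (the comparison-counting function is illustrative, not how a real B-tree is implemented):

```python
# With a B-tree-style index, the uniqueness lookup behaves like a
# binary search over sorted keys: roughly log2(n) comparisons.
def comparisons_with_index(sorted_ids, new_id):
    comparisons = 0
    lo, hi = 0, len(sorted_ids)
    while lo < hi:            # halve the search range each step
        comparisons += 1
        mid = (lo + hi) // 2
        if sorted_ids[mid] < new_id:
            lo = mid + 1
        else:
            hi = mid
    return comparisons

for n in (10, 100, 1000):
    # searching for an absent ID: only ~log2(n) comparisons, not n
    print(n, comparisons_with_index(list(range(n)), n))
```

Even at 1000 rows the indexed check needs only about 10 comparisons instead of 1000, which is why unique and primary-key columns are almost always backed by an index.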