Keys (primary, candidate, foreign, super) in DBMS Theory - Time & Space Complexity
When working with database keys, it is important to understand how the cost of checking or enforcing those keys grows as the data grows. In particular, we want to know how operations such as searching for a key or validating a uniqueness constraint scale as the number of records increases.
Analyze the time complexity of checking a primary key constraint during data insertion.
```sql
-- Assume a table with a primary key column
INSERT INTO Employees (EmployeeID, Name) VALUES (123, 'Alice');
-- The DBMS checks whether EmployeeID 123 already exists
SELECT EmployeeID FROM Employees WHERE EmployeeID = 123;
-- If it exists, reject the insert; otherwise, insert the new record
```
This snippet shows the DBMS verifying uniqueness of a primary key before inserting a new record.
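The same check can be sketched outside the database. This is a hypothetical model, not a real DBMS internal: the table is a plain Python list, and `insert_employee` is an illustrative name for the scan-then-insert logic above.

```python
# Illustrative model of a primary-key check without an index.
# "employees" stands in for the table; each record is (employee_id, name).
employees = []

def insert_employee(employee_id, name):
    # Scan every existing record to verify uniqueness: O(n) per insert.
    for existing_id, _ in employees:
        if existing_id == employee_id:
            raise ValueError(f"duplicate primary key: {employee_id}")
    employees.append((employee_id, name))

insert_employee(123, "Alice")
try:
    insert_employee(123, "Bob")  # rejected: key 123 already exists
except ValueError as err:
    print(err)
```

The `for` loop is the cost that grows with the table: every insert re-walks all existing records before the new row is accepted.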
Look at what the DBMS does repeatedly when inserting data.
- Primary operation: Searching the existing records for the key value.
- How many times: Once per insert, but each check must search all existing data (a full scan of the table if no index is present).
As the number of records grows, the search to check key uniqueness takes more time.
| Records in Table (n) | Approx. Comparisons per Insert (no index) |
|---|---|
| 10 | About 10 checks |
| 100 | About 100 checks |
| 1000 | About 1000 checks |
Pattern observation: Without an index, the search grows linearly with the number of records.
Time Complexity: O(n)
This means the time to check key uniqueness on each insert grows linearly with the number of records already in the table.
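The linear pattern in the table can be confirmed by counting comparisons directly. This is a sketch under the no-index assumption; `comparisons_for_check` is a made-up helper, not a DBMS function.

```python
# Count how many comparisons a full scan makes when checking a
# brand-new key (the worst case: every existing key is examined).
def comparisons_for_check(existing_keys, new_key):
    count = 0
    for key in existing_keys:
        count += 1
        if key == new_key:
            break  # duplicate found; scan can stop early
    return count

for n in (10, 100, 1000):
    keys = list(range(n))
    # A key not yet in the table forces a scan of all n records.
    print(n, comparisons_for_check(keys, -1))
    # prints: 10 10 / 100 100 / 1000 1000
```

Doubling the table doubles the comparisons, which is exactly the O(n) growth described above.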
[X] Wrong: "Checking a primary key is always instant no matter how big the table is."
[OK] Correct: Without an index, the database must scan records one by one, so more data means more time.
Understanding how key checks scale helps you explain database performance and design better data models.
What if the primary key column has an index? How would the time complexity of checking uniqueness change?
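One way to start exploring that question is to model an index with a hash table. This is a simplified sketch: real databases typically back primary keys with a B-tree index (roughly O(log n) lookups), while a Python set behaves like a hash index (average O(1)). The point is that either way, the check no longer scans all rows.

```python
# Illustrative model of an indexed key check: membership in a set
# is an average O(1) hash lookup, not an O(n) scan.
indexed_keys = set()

def key_exists(key):
    return key in indexed_keys

for k in range(1000):
    indexed_keys.add(k)

print(key_exists(500))   # True
print(key_exists(5000))  # False
```

With 1,000 keys or 1,000,000, the membership test does roughly the same amount of work, which is why indexes are the standard tool for enforcing primary keys at scale.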