Selection Operation in a DBMS: Time and Space Complexity
When we run a selection operation in a database, we want to find rows that match a condition.
We ask: How does the time to find these rows grow as the table gets bigger?
Consider the time complexity of executing the following SQL query:

```sql
SELECT *
FROM Employees
WHERE Department = 'Sales';
```
This query looks through the Employees table to find all rows where the Department is 'Sales'.
In this selection operation:
- Primary operation: Checking each row's Department value.
- How many times: Once for every row in the Employees table.
As the number of rows grows, the database checks more rows one by one.
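This row-by-row check can be sketched as a short loop. The sketch below is a toy model in Python, not real DBMS internals; the `select_where` function and the sample `employees` table are illustrative assumptions.

```python
# Toy model of an unindexed selection: visit every row, one check each.
def select_where(rows, column, value):
    """Return matching rows and the number of per-row checks performed."""
    matches = []
    checks = 0
    for row in rows:            # full scan: visit every row in the table
        checks += 1             # one comparison per row -> O(n) total
        if row[column] == value:
            matches.append(row)
    return matches, checks

employees = [
    {"Name": "Ada",  "Department": "Sales"},
    {"Name": "Ben",  "Department": "IT"},
    {"Name": "Cara", "Department": "Sales"},
]
result, checks = select_where(employees, "Department", "Sales")
print(checks, [r["Name"] for r in result])  # 3 ['Ada', 'Cara']
```

Note that the loop cannot stop at the first match: the query asks for all matching rows, so every row must be visited.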
| Table Size (n) | Comparisons |
|---|---|
| 10 | 10 checks |
| 100 | 100 checks |
| 1000 | 1000 checks |
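The table above can be reproduced with a small sketch (a toy table built in Python; the `checks_for_scan` helper is an illustrative assumption):

```python
# For each table size n, an unindexed selection makes exactly n checks.
def checks_for_scan(n):
    departments = ["Sales", "IT", "HR"] * (n // 3 + 1)
    rows = departments[:n]           # a toy table of n rows
    checks = 0
    for dept in rows:                # full scan: visit every row
        checks += 1
        _ = (dept == "Sales")        # the per-row comparison
    return checks

results = {n: checks_for_scan(n) for n in (10, 100, 1000)}
print(results)  # {10: 10, 100: 100, 1000: 1000}
```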
Pattern observation: The number of checks grows directly with the number of rows.
Time Complexity: O(n)
Space Complexity: O(1) auxiliary space, since a scan only needs to hold the current row (the result set itself is output, not working space).
This means the time to find matching rows grows linearly: doubling the number of rows roughly doubles the work.
[X] Wrong: "The database only looks at a few rows, so time stays the same no matter the table size."
[OK] Correct: Without an index, the database must check every row to be sure it finds all matches.
Understanding how selection scales helps you explain database performance clearly and shows you know how queries behave with growing data.
"What if the Department column had an index? How would the time complexity change?"
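One way to explore this question empirically is with Python's built-in `sqlite3` module and `EXPLAIN QUERY PLAN` (an assumption: this shows SQLite's planner specifically, and the index name `idx_department` is chosen for illustration). Without an index the plan is a full table scan; with a B-tree index on `Department`, the lookup becomes a search, roughly O(log n + k) for k matching rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employees (Name TEXT, Department TEXT)")
conn.executemany(
    "INSERT INTO Employees VALUES (?, ?)",
    [("Ada", "Sales"), ("Ben", "IT"), ("Cara", "Sales")],
)

query = "SELECT * FROM Employees WHERE Department = 'Sales'"

# Without an index: the plan contains a SCAN step -> O(n).
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

# With an index on Department, SQLite can SEARCH the index B-tree instead.
conn.execute("CREATE INDEX idx_department ON Employees(Department)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

print(plan_before)  # detail column mentions a SCAN of Employees
print(plan_after)   # detail column mentions a SEARCH using idx_department
conn.close()
```

The exact plan text varies between SQLite versions, but the shift from a scan to an index search is the key point.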