Record storage and page layout in DBMS Theory - Time & Space Complexity
When storing records in a database, how fast we can find or add data depends on how records are arranged on pages.
We want to understand how the time to access records grows as the number of records increases.
Analyze the time complexity of searching for a record in a page-based storage system.
```sql
-- Assume a page stores multiple records.
-- To find a record, the system scans the records on the page one by one.
FOR each record IN page LOOP
    IF record matches search_key THEN
        RETURN record;
    END IF;
END LOOP;
```
This pseudocode scans the records on a page sequentially until it finds a match. To analyze its cost, look for the repeated operations that dominate the running time.
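The sequential scan above can be sketched as a short, runnable Python function. The dictionary-based records and the `page` list here are illustrative stand-ins for an on-page record layout, not a real DBMS page format:

```python
# Minimal sketch of a sequential (linear) scan over records on a page.
# The record structure is hypothetical, chosen only for illustration.

def find_record(page, search_key):
    """Scan records on the page one by one; return the first match."""
    for record in page:               # up to n checks for n records
        if record["key"] == search_key:
            return record
    return None                       # key not on this page

page = [{"key": k, "value": f"row-{k}"} for k in (7, 3, 9, 1)]
print(find_record(page, 9))           # found after scanning 3 records
print(find_record(page, 42))          # prints None: scanned all 4 records
```

In the worst case (the key is absent or stored last), the loop touches every record, which is exactly the behavior the complexity analysis below counts.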
- Primary operation: Scanning each record in the page one by one.
- How many times: Up to the number of records in the page (n times).
As the number of records in a page grows, the time to find a record grows too.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | Up to 10 record checks |
| 100 | Up to 100 record checks |
| 1000 | Up to 1000 record checks |
Pattern observation: The time grows roughly in direct proportion to the number of records scanned.
Time Complexity: O(n)
This means the time to find a record grows linearly with the number of records on the page.
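The linear growth can be made concrete by counting comparisons in the worst case (searching for a key that is not on the page). This counter-based sketch is illustrative; the page is simulated as a plain list of keys:

```python
# Count worst-case comparisons for a page with n records.
# Searching for a key that is absent forces a full scan.

def count_checks(n, search_key=-1):
    """Scan a simulated page of n records; return how many checks occur."""
    page = list(range(n))     # page with n records, keys 0..n-1
    checks = 0
    for record in page:
        checks += 1
        if record == search_key:
            break
    return checks

for n in (10, 100, 1000):
    print(n, count_checks(n))   # checks grow in step with n: O(n)
```

The printed counts match the table above: ten records cost up to ten checks, a thousand records up to a thousand.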
[X] Wrong: "Searching for a record on a page always takes the same time no matter how many records there are."
[OK] Correct: Because the system checks records one by one, more records mean more checks and more time.
Understanding how record layout affects search time helps you explain database performance clearly and shows you grasp practical data storage concepts.
What if the records on a page were stored in a sorted order? How would that change the time complexity of searching?
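One possible answer, sketched here as a hypothetical: if the records on a page are kept sorted by key, the system can use binary search instead of a full scan, reducing the search to O(log n) comparisons. This sketch uses Python's standard `bisect` module over a list of sorted keys:

```python
import bisect

# Binary search over a page whose record keys are kept in sorted order.
# The sorted-keys list is an illustrative stand-in for a sorted slot array.

def find_in_sorted_page(keys, search_key):
    """Return the slot index of search_key, or None if absent."""
    i = bisect.bisect_left(keys, search_key)   # O(log n) comparisons
    if i < len(keys) and keys[i] == search_key:
        return i
    return None

sorted_keys = [1, 3, 7, 9]
print(find_in_sorted_page(sorted_keys, 7))     # found at slot 2
print(find_in_sorted_page(sorted_keys, 5))     # prints None: key absent
```

The trade-off is that keeping records sorted makes inserts more expensive, since new records must be placed in key order, which is why real systems often combine sorted pages with index structures.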