Demand Paging in Operating Systems - Time & Space Complexity
When using demand paging, the system loads pages only when needed. We want to understand how the time to access memory changes as the number of pages grows.
How does the number of page accesses affect the total time spent loading pages?
Analyze the time complexity of the following simplified demand paging process.
```
for each memory_access in program:
    if page not in memory:
        load page from disk
    access data in page
```
For each memory access, this code checks whether the page is already resident in memory; if it is not (a page fault), the page is loaded from disk before the data is accessed.
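As a rough illustration, the loop above can be sketched as a runnable Python simulation (the `run` helper, the `PAGE_SIZE` value, and the returned counters are illustrative assumptions, not a real OS interface):

```python
PAGE_SIZE = 4096  # illustrative page size in bytes

def run(program):
    """Simulate demand paging for a list of virtual addresses.

    Returns (checks, loads): one residency check per access,
    one "disk load" per page fault (first touch of a page).
    """
    memory = set()        # set of resident page numbers
    checks = loads = 0
    for address in program:
        page = address // PAGE_SIZE   # page containing this address
        checks += 1                   # "if page not in memory"
        if page not in memory:        # page fault
            loads += 1                # "load page from disk"
            memory.add(page)
        # "access data in page" -- a no-op in this sketch
    return checks, loads
```

For example, `run([0, 1, 4096, 0])` performs four checks but only two loads, because pages 0 and 1 are each loaded only on first touch.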
Look for repeated actions that affect time.
- Primary operation: Checking and loading pages for each memory access.
- How many times: Once per memory access in the program.
As the number of memory accesses increases, the system may need to load more pages, which takes extra time.
| Input Size (memory accesses) | Approx. Operations (page checks and loads) |
|---|---|
| 10 | 10 page checks, up to 10 page loads |
| 100 | 100 page checks, up to 100 page loads |
| 1000 | 1000 page checks, up to 1000 page loads |
Pattern observation: The time grows roughly in direct proportion to the number of memory accesses, since each access may cause a page load.
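This pattern can be checked with a short, self-contained count (the `operations` helper and the tiny 8-byte toy page size are illustrative choices so that page loads actually occur):

```python
def operations(n, page_size=8):
    """Count page checks and disk loads for n sequential
    memory accesses under demand paging."""
    resident, checks, loads = set(), 0, 0
    for address in range(n):
        checks += 1                  # one residency check per access
        page = address // page_size
        if page not in resident:     # first touch -> page fault
            loads += 1
            resident.add(page)
    return checks, loads

for n in (10, 100, 1000):
    print(n, operations(n))
```

The number of checks equals the number of accesses exactly, and loads never exceed accesses, so total work grows in direct proportion to `n`.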
Time Complexity: O(n)
This means the time to handle memory accesses grows linearly with the number of accesses, because each access may trigger a page load.
[X] Wrong: "Demand paging always takes constant time per access because pages are loaded only once."
[OK] Correct: The in-memory check is fast, but the first access to each page triggers a slow disk load. The total time is therefore the per-access checks plus one load per unique page touched, so it depends on how many distinct pages the program accesses, not just the number of accesses.
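One way to see this is to count loads for two access traces of equal length (the `count_loads` helper is a hypothetical sketch, with accesses given directly as page numbers):

```python
def count_loads(accesses):
    """Disk loads = number of distinct pages first touched."""
    resident, loads = set(), 0
    for page in accesses:
        if page not in resident:  # page fault on first touch only
            loads += 1
            resident.add(page)
    return loads

# Both traces make 12 accesses, but touch different numbers of pages:
print(count_loads([0, 1, 2] * 4))    # 3 unique pages -> 3 loads
print(count_loads(list(range(12))))  # 12 unique pages -> 12 loads
```

Same number of accesses, very different disk time: the load count tracks unique pages, not total accesses.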
Understanding how demand paging affects time helps you explain how operating systems manage memory efficiently. This skill shows you can think about how system design impacts performance.
What if the program accesses pages in a repeating pattern that fits entirely in memory? How would the time complexity change?