Memory-mapped files in Operating Systems - Time & Space Complexity
When using memory-mapped files, it is important to understand how access time grows as the file size increases: how does the system's work change when we read or write larger files through a mapping?
Analyze the time complexity of accessing data using memory-mapped files.
```
// Pseudocode for memory-mapped file access
map = mmap(file, size)
for i in 0 to size-1:
    data = map[i]
    process(data)
munmap(map)
```
This code maps a file into memory, then reads each byte sequentially to process it.
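The pseudocode above can be sketched as a runnable example using Python's standard `mmap` module. The temporary file and its contents are purely illustrative; the point is that the loop touches each byte of the mapping exactly once.

```python
import mmap
import os
import tempfile

# Create a small temporary file to map (illustrative data only).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello memory-mapped world")
    path = f.name

total = 0
with open(path, "rb") as f:
    # Map the whole file into the process's address space.
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        # Read each byte sequentially -- one access per byte, so O(n).
        for i in range(len(m)):
            total += m[i]  # "process" each byte by summing its value

os.remove(path)
print(total)
```

Note that the mapping itself is cheap; the cost that scales with the file is the loop that reads every byte.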
Look for repeated actions that affect performance.
- Primary operation: Loop reading each byte from the mapped memory.
- How many times: Once for every byte in the file (size times).
As the file size grows, the number of bytes to read grows too, so the work grows proportionally.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 reads and processes |
| 100 | 100 reads and processes |
| 1000 | 1000 reads and processes |
Pattern observation: The work grows directly with the file size; doubling the file doubles the work.
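The pattern in the table can be checked with a short counter, assuming one unit of work per byte read:

```python
def work(size):
    """Count one operation per byte, as in the sequential read loop."""
    ops = 0
    for _ in range(size):
        ops += 1  # one read-and-process per byte
    return ops

for n in (10, 100, 1000):
    print(f"n={n}: {work(n)} operations")

# Doubling the input doubles the work.
ratio = work(2000) / work(1000)
print(ratio)  # -> 2.0
```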
Time Complexity: O(n)
This means the time to read and process the file grows in direct proportion to the file size.
[X] Wrong: "Memory-mapped files always make access instant, so time does not grow with file size."
[OK] Correct: While memory mapping avoids explicit read() calls and extra buffer copies, pages are still faulted in on first access, and reading each byte still takes time proportional to the file size because every byte must be accessed and processed.
Understanding how memory-mapped files scale helps you explain efficient file access in real systems and shows you can reason about performance in practical scenarios.
What if we only accessed every 10th byte instead of every byte? How would the time complexity change?
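One way to explore this question is a small sketch that touches only every 10th byte (a stride of 10, chosen here just for illustration). The loop runs roughly n/10 times, which is still O(n), because constant factors are dropped in big-O notation:

```python
import mmap
import os
import tempfile

# Build a 1024-byte temporary file for illustration.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(bytes(range(256)) * 4)  # 1024 bytes
    path = f.name

accesses = 0
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        # Stride of 10: touch every 10th byte only.
        for i in range(0, len(m), 10):
            _ = m[i]
            accesses += 1

os.remove(path)
print(accesses)  # 1024 bytes -> 103 accesses (indices 0, 10, ..., 1020)
```

The work shrinks by a constant factor of 10, but it still grows linearly with the file size, so the time complexity remains O(n).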