Copy-on-write technique in Operating Systems - Time & Space Complexity
Analyzing time complexity shows how the copy-on-write technique affects system performance as data size grows: specifically, how the number of operations scales when memory is copied with this technique.
Analyze the time complexity of the following simplified copy-on-write process.
function copyOnWrite(originalPage):
    if originalPage is shared:
        create newPage
        copy data from originalPage to newPage
        update reference to newPage
        return newPage
    else:
        use originalPage directly
        return originalPage
This code copies memory only when a shared page is modified, otherwise it reuses the original page.
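As a rough illustration, the pseudocode above might be sketched in Python as follows. The `Page` class, the reference counter, and the function names are hypothetical teaching aids, not any real OS API:

```python
class Page:
    """A memory page that tracks how many references share it."""
    def __init__(self, data):
        self.data = bytearray(data)  # copying the data costs O(page size)
        self.ref_count = 1

def copy_on_write(page, offset, value):
    """Write one byte, copying the page first only if it is shared."""
    if page.ref_count > 1:           # page is shared: copy before writing
        new_page = Page(page.data)   # the O(n) copy happens here
        page.ref_count -= 1
        page = new_page
    page.data[offset] = value        # the actual write
    return page

# Two "processes" share one page until the first write.
original = Page(b"\x00" * 4096)
original.ref_count = 2               # simulate a second reference

modified = copy_on_write(original, 0, 0xFF)
print(modified is original)          # False: the write forced a copy
print(original.data[0])              # 0: the original page is untouched
```

Note that the copy is triggered by the write, not by the act of sharing: as long as `ref_count` stays at 1, writes go straight to the existing page with no copying at all.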
Look for operations that repeat as input size grows.
- Primary operation: copying data from the original page to a new page.
- How many times: once per write to a shared page, and the copy size depends on the page size.
The time to copy grows with the size of the page being copied, but only when a write occurs.
| Input Size (page size in KB) | Approx. Operations (copying data) |
|---|---|
| 4 | 4 units of copy work |
| 16 | 16 units of copy work |
| 64 | 64 units of copy work |
Pattern observation: The copying work grows linearly with the page size, but only happens on writes, not on reads.
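The pattern in the table can be checked with a tiny counting sketch, where one loop iteration stands in for one "unit of copy work" per KB (an abstraction matching the table, not a real memory benchmark):

```python
def copy_work(page_size_kb):
    """Count copy operations: work grows linearly with page size, O(n)."""
    units = 0
    for _ in range(page_size_kb):  # one unit of work per KB copied
        units += 1
    return units

for size in (4, 16, 64):
    print(size, "KB ->", copy_work(size), "units of copy work")
```

Doubling the page size doubles the work, which is exactly what linear, O(n), growth means.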
Time Complexity: O(n), where n is the page size.
This means the time to copy data grows directly with the size of the memory page being copied; reads and unmodified copies stay cheap because no data is moved.
[X] Wrong: "Copy-on-write always copies data immediately when a copy is made."
[OK] Correct: Copy-on-write delays copying until the data is actually modified, so copies that are never written to cost almost nothing.
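The contrast above can be made concrete with a counting sketch (the `CowBuffer` class and its methods are hypothetical): making the "copy" is O(1) because only a reference is shared, and the O(n) copy is paid only on the first write.

```python
copies_performed = 0

class CowBuffer:
    """A buffer that shares its data until someone writes to it."""
    def __init__(self, data):
        self.data = data
        self.shared = False

    def lazy_copy(self):
        """'Copy' the buffer in O(1) by sharing the underlying data."""
        clone = CowBuffer(self.data)
        self.shared = clone.shared = True
        return clone

    def write(self, index, value):
        """Pay the O(n) copy only when writing to a shared buffer."""
        global copies_performed
        if self.shared:
            self.data = bytearray(self.data)  # the real O(n) copy
            self.shared = False
            copies_performed += 1
        self.data[index] = value

a = CowBuffer(bytearray(b"hello"))
b = a.lazy_copy()        # no data copied yet
print(copies_performed)  # 0
b.write(0, ord("H"))     # first write triggers the one copy
print(copies_performed)  # 1
print(bytes(a.data))     # b'hello': the original is untouched
print(bytes(b.data))     # b'Hello'
```

If `b` were never written to, the copy would never happen at all, which is exactly the saving the correct statement describes.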
Understanding copy-on-write time complexity shows you can reason about efficient memory use and performance, a valuable skill in system design discussions.
What if the system used smaller page sizes for copy-on-write? How would that affect the time complexity?