Contiguous allocation in Operating Systems - Time & Space Complexity
When managing files in an operating system, contiguous allocation stores files in one continuous block on disk.
We want to understand how the time to find or allocate space grows as file size or disk size changes.
Analyze the time complexity of the following code snippet.
```c
int find_contiguous_space(int disk[], int size, int file_size) {
    // disk[i] == 0 marks a free block; any other value marks a used block
    for (int i = 0; i < size; i++) {
        int count = 0;
        // count consecutive free blocks starting at position i
        while (count < file_size && i + count < size && disk[i + count] == 0) {
            count++;
        }
        if (count == file_size) {
            return i; // start index found
        }
    }
    return -1; // no space found
}
```
This code searches the disk array to find a continuous free space block large enough for the file.
Identify the loops, recursion, or array traversals that repeat work:
- Primary operation: The outer loop scans each disk block, and the inner while loop checks consecutive blocks for free space.
- How many times: The outer loop runs up to n times (the disk size), and for each starting position the inner loop runs up to m times (the file size).
As the disk size grows, the search checks more starting points, and each starting point may require up to file_size block checks.
| Input Size (disk size n) | Approx. Operations |
|---|---|
| 10 | Up to 10 * file_size checks |
| 100 | Up to 100 * file_size checks |
| 1000 | Up to 1000 * file_size checks |
Pattern observation: The total checks grow roughly by multiplying disk size and file size.
Time Complexity: O(n * m), where n is the disk size and m is the file size.
This means the time to find space grows proportionally to the product of the disk size and the file size.
[X] Wrong: "The search time depends only on the disk size, not the file size."
[OK] Correct: For each candidate start position, the code may check up to file_size consecutive blocks to confirm the run is free, so the search time depends on both the disk size and the file size.
Understanding how searching for continuous space scales helps you reason about file system efficiency and resource management.
What if the disk was represented as a linked list instead of an array? How would the time complexity change?