Inter-process communication (pipes, shared memory) in Operating Systems - Time & Space Complexity
When processes communicate through pipes or shared memory, the time spent depends on how much data they exchange.
We want to understand how that time grows as the amount of data increases.
Analyze the time complexity of the following code snippet.
```c
#include <unistd.h>

#define SIZE 1024  /* must fit within the pipe's capacity (often 64 KiB), or write() would block */

int main(void) {
    int fd[2];
    pipe(fd);                          // create the pipe
    char buffer[SIZE];
    for (int i = 0; i < SIZE; i++) {   // fill the buffer: SIZE steps
        buffer[i] = 'a';
    }
    write(fd[1], buffer, SIZE);        // write SIZE bytes to the pipe
    read(fd[0], buffer, SIZE);         // read SIZE bytes from the pipe
    return 0;
}
```
This code creates a pipe, fills a buffer, then writes and reads data through the pipe.
Identify the operations that repeat: loops, recursion, or array traversals.
- Primary operation: Loop filling the buffer and the write/read calls that transfer data.
- How many times: The loop runs SIZE times; write and read handle SIZE bytes.
As the buffer size grows, the time to fill, write, and read grows proportionally.
| Input Size (SIZE) | Approx. Operations |
|---|---|
| 10 | About 10 steps to fill + 10 to write + 10 to read = 30 steps |
| 100 | About 100 + 100 + 100 = 300 steps |
| 1000 | About 1000 + 1000 + 1000 = 3000 steps |
Pattern observation: The total work grows directly with the size of data; doubling data roughly doubles the time.
Time Complexity: O(n)
This means the time to communicate grows linearly with the amount of data sent.
[X] Wrong: "Sending data through a pipe or shared memory always takes constant time regardless of size."
[OK] Correct: The time depends on how much data is sent; more data means more bytes to copy and transfer.
Understanding how communication time grows helps you design efficient programs and explain system behavior clearly.
"What if we used multiple smaller writes instead of one big write? How would the time complexity change?"