Race Condition Problem in Operating Systems: Time & Space Complexity
When multiple processes access shared data without proper control, a race condition can occur.
We want to understand how the time to detect or handle this problem grows as more processes are involved.
Analyze the time complexity of the following code snippet.
```c
// Shared variable
int counter = 0;

// Process function
void increment() {
    int temp = counter; // Read
    temp = temp + 1;    // Modify
    counter = temp;     // Write
}

// Multiple processes call increment concurrently
```
This code shows multiple processes incrementing a shared counter without synchronization, leading to a race condition.
Identify the operations that repeat: loops, recursion, or array traversals.
- Primary operation: Each process reads, modifies, and writes the shared variable.
- How many times: Once per process, but processes run concurrently and may interfere.
As the number of processes increases, the chance of conflicting access grows, causing more retries or errors.
| Processes (n) | Approx. Operations |
|---|---|
| 10 | About 10 increments, some conflicts possible |
| 100 | About 100 increments, conflicts more frequent |
| 1000 | About 1000 increments, many conflicts causing delays |
Pattern observation: The number of conflicting operations grows roughly with the number of processes, increasing the time to complete all increments.
Time Complexity: O(n)
This means the time to safely complete all increments grows linearly with the number of processes involved.
[X] Wrong: "Since each process only does a few steps, the total time stays constant no matter how many processes run."
[OK] Correct: Without control, processes interfere, causing retries or errors that add time as more processes run.
Understanding how race conditions affect execution time helps you design safer programs and explain concurrency challenges clearly.
"What if we add locks to control access? How would the time complexity change?"