Mutex locks in Operating Systems - Time & Space Complexity
When using mutex locks, it is important to understand how the time to complete tasks changes as more threads contend for a shared resource. In particular, we want to know how locking affects performance when many threads compete for the same lock.
Analyze the time complexity of the following code snippet.
```c
pthread_mutex_lock(&lock);   /* acquire the lock; block if another thread holds it */
// critical section: only one thread executes this at a time
for (int i = 0; i < n; i++) {
    // perform operation
}
pthread_mutex_unlock(&lock); /* release the lock so waiting threads can proceed */
```
This code locks a mutex before running a loop of n operations, then unlocks it after.
Identify the repeated work: loops, recursion, or array traversals.
- Primary operation: The loop inside the locked section runs n times.
- How many times: the loop body executes n times per lock acquisition, and while the lock is held, every other thread that needs it must wait.
As n grows, the time spent inside the locked section grows linearly because the loop runs n times.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 operations inside the lock |
| 100 | About 100 operations inside the lock |
| 1000 | About 1000 operations inside the lock |
Pattern observation: The time inside the lock grows directly with n, so doubling n roughly doubles the time.
Time Complexity: O(n)
Space Complexity: O(1), since the loop uses only a fixed amount of extra memory regardless of n.
This means the time to complete the locked section grows linearly as the number of operations increases. With T threads contending, the lock serializes this work, so total progress across all threads takes roughly T × n operations.
[X] Wrong: "Mutex locks add no delay regardless of how many threads wait."
[OK] Correct: When many threads wait for the lock, they queue up and wait their turn, which adds waiting time and slows overall progress.
Understanding how mutex locks affect time helps you explain thread synchronization and performance in real systems, a key skill for many roles.
What if the critical section code inside the lock was replaced with a constant-time operation? How would the time complexity change?