Semaphores (counting and binary) in Operating Systems - Time & Space Complexity
When using semaphores, it is important to understand how the time to complete synchronization changes as more processes contend for a shared resource. The question is how the wait and signal operations scale as the number of competing processes grows.
Analyze the time complexity of the following semaphore operations.
semaphore.wait() // process tries to enter critical section
// critical section code
semaphore.signal() // process leaves critical section
This code shows a process using a semaphore to control access to a shared resource.
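The pattern above can be sketched in Python, where `threading.Semaphore` exposes `acquire()` (wait) and `release()` (signal). This is a minimal illustration, not the pseudocode's exact API: the counting semaphore is initialized to 2 so that up to two threads may hold the resource at once, and the names `worker` and `results` are illustrative.

```python
import threading

sem = threading.Semaphore(2)  # counting semaphore: up to 2 concurrent holders
results = []

def worker(i):
    sem.acquire()              # wait(): blocks if the count is 0
    try:
        results.append(i)      # critical section code
    finally:
        sem.release()          # signal(): increments the count, waking one waiter

threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # every thread entered the critical section exactly once
```

Using `try`/`finally` mirrors the pseudocode's structure and guarantees `signal()` runs even if the critical section raises an exception.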
Look at what repeats when many processes use the semaphore.
- Primary operation: each process calls wait() once before entering and signal() once after leaving.
- How many times: once per process, so n processes produce n wait/signal pairs.
As more processes try to enter, each must wait if the semaphore is unavailable.
| Number of Processes (n) | Approx. Operations |
|---|---|
| 10 | About 10 wait and 10 signal calls |
| 100 | About 100 wait and 100 signal calls |
| 1000 | About 1000 wait and 1000 signal calls |
Pattern observation: The total semaphore operations grow directly with the number of processes.
Time Complexity: O(n)
This means the total number of semaphore operations grows linearly: n processes performing one access each make roughly 2n calls (n wait, n signal). Space complexity is also O(n) in the worst case, since the semaphore's wait queue may hold up to n blocked processes at once.
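The linear growth shown in the table can be checked empirically by tallying the wait and signal calls as n threads each enter the critical section once. A small sketch (the counter names and the `run` helper are illustrative, not part of any standard API):

```python
import threading

def run(n):
    """Count total wait()/signal() calls when n threads each enter once."""
    sem = threading.Semaphore(1)
    calls = {"wait": 0, "signal": 0}
    lock = threading.Lock()  # protects the tally counters themselves

    def worker():
        with lock:
            calls["wait"] += 1
        sem.acquire()        # wait()
        sem.release()        # signal()
        with lock:
            calls["signal"] += 1

    threads = [threading.Thread(target=worker) for _ in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return calls["wait"] + calls["signal"]

for n in (10, 100, 1000):
    print(n, run(n))  # total operations = 2n: linear growth, O(n)
```

Doubling n doubles the total call count, which is exactly what O(n) predicts.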
[X] Wrong: "Semaphore operations take constant time regardless of how many processes wait."
[OK] Correct: A single wait() or signal() call may be O(1), but the system must manage a queue of blocked processes, so the total number of operations, and each process's time spent waiting, grow with the number of processes.
Understanding how semaphore operations scale helps you explain synchronization efficiency and resource management clearly in real-world systems.
What if we replaced the counting semaphore with a binary semaphore? How would the time complexity change when many processes compete?
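One way to explore this question is to run the same experiment with a binary semaphore, i.e. a counting semaphore initialized to 1, so at most one thread holds the resource at a time. A hedged sketch (the `order` list is illustrative): each of the n threads still performs exactly one wait and one signal, so the total operation count remains 2n and the asymptotic complexity stays O(n); what changes is the concurrency level, since entries are fully serialized and contention per entry may rise.

```python
import threading

binary = threading.Semaphore(1)  # binary semaphore: at most one holder
order = []

def worker(i):
    binary.acquire()       # wait(): only one thread may proceed at a time
    try:
        order.append(i)    # critical section runs strictly serialized
    finally:
        binary.release()   # signal(): hands the resource to the next waiter

threads = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Still one wait and one signal per thread: 2n operations total -> O(n).
print(len(order))  # 8
```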