Why Synchronization Prevents Data Corruption in Operating Systems: A Performance Analysis
When multiple processes access shared data, synchronization serializes their access so conflicting updates cannot corrupt it.
We want to understand how synchronization affects the total number of operations as more processes contend for the same data.
Analyze the time complexity of this synchronization example.
```python
lock.acquire()                 # block until the lock is free
shared_data = shared_data + 1  # critical section: update the shared data
lock.release()                 # let the next waiting process proceed
```
This code shows a process acquiring the lock before updating the shared data, then releasing it so other processes can take their turn.
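Run as a complete program, the idea above might look like this minimal sketch using Python threads to stand in for processes (names like `worker` and `NUM_THREADS` are illustrative, not from the original):

```python
import threading

shared_data = 0
lock = threading.Lock()
NUM_THREADS = 100  # stands in for "many processes"

def worker():
    global shared_data
    lock.acquire()                 # wait for exclusive access
    shared_data = shared_data + 1  # critical section: safe read-modify-write
    lock.release()                 # hand the lock to the next waiter

threads = [threading.Thread(target=worker) for _ in range(NUM_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared_data)  # 100: every update survives because access was serialized
```

Because each thread performs exactly one locked update, no increment is lost, which is the data-corruption guarantee the lesson describes.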
Look at what repeats when many processes run this code.
- Primary operation: acquiring and releasing the lock around data update
- How many times: once per process trying to update shared data
As more processes try to update, each must wait for the lock, so operations grow with the number of processes.
| Processes (n) | Approx. Operations |
|---|---|
| 10 | 10 lock acquire/release + 10 updates |
| 100 | 100 lock acquire/release + 100 updates |
| 1000 | 1000 lock acquire/release + 1000 updates |
Pattern observation: operations increase directly with the number of processes trying to update.
Time Complexity: O(n)
This means the total time grows linearly as more processes try to update the shared data safely.
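One way to see this linear growth empirically is to count lock operations directly. The `CountingLock` wrapper below is a hypothetical helper written for this sketch, not a standard library API:

```python
import threading

class CountingLock:
    """Hypothetical wrapper that counts how often the lock is acquired."""
    def __init__(self):
        self._lock = threading.Lock()
        self.acquires = 0

    def acquire(self):
        self._lock.acquire()
        self.acquires += 1  # safe: we hold the lock while counting

    def release(self):
        self._lock.release()

def run(n):
    """Run n workers, each doing one locked update; return total acquires."""
    lock = CountingLock()
    state = {"shared": 0}

    def worker():
        lock.acquire()
        state["shared"] += 1  # critical section
        lock.release()

    threads = [threading.Thread(target=worker) for _ in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return lock.acquires

print(run(10), run(100), run(1000))  # 10 100 1000: one acquire per process
```

The counts match the table: the number of acquire/release pairs grows in direct proportion to n, which is exactly what O(n) expresses.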
[X] Wrong: "Synchronization makes updates instant and free of waiting."
[OK] Correct: Synchronization adds waiting time because processes must take turns, so total time grows in proportion to the number of contending processes.
Understanding how synchronization affects operation growth helps you explain safe data access and performance trade-offs clearly.
What if we replaced the lock with a lock-free method? How would the time complexity change?
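As a starting point for that question, here is one hedged sketch. Under CPython, the GIL makes a single `next()` call on `itertools.count` effectively atomic, so it can serve as a simple stand-in for a lock-free counter; real lock-free algorithms rely on hardware compare-and-swap, which Python does not expose directly. The total work is still O(n), one increment per process, but no process ever blocks waiting for a lock:

```python
import itertools
import threading

counter = itertools.count(1)  # C-implemented; next() is atomic under CPython's GIL

def worker():
    next(counter)  # fetch-and-increment without acquiring any lock

threads = [threading.Thread(target=worker) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

final = next(counter)
print(final)  # 101: all 100 increments completed, with no waiting on a lock
```

So the asymptotic count of operations stays O(n); what changes is the contention behavior, since processes no longer queue behind one another.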