Imagine a popular online booking system where many users try to book the same slot simultaneously. Why is concurrency control crucial in this scenario?
Think about what happens if two people book the same slot at the exact same time.
Concurrency control prevents race conditions in which two or more users book the same slot at the same time, which would cause double bookings and leave the data inconsistent.
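The race above comes from an unsynchronized check-then-book sequence. A minimal sketch (the `Slot` class and user names are illustrative, not from any real library) simulates the bad interleaving step by step, then shows how a lock makes the check and the write one atomic step:

```python
import threading

class Slot:
    def __init__(self):
        self.booked_by = None

# Unsafe interleaving, simulated explicitly: both users see the slot as
# free before either one writes, so both "succeed".
slot = Slot()
alice_sees_free = slot.booked_by is None   # Alice checks
bob_sees_free = slot.booked_by is None     # Bob checks before Alice writes
if alice_sees_free:
    slot.booked_by = "alice"               # Alice books
if bob_sees_free:
    slot.booked_by = "bob"                 # Bob overwrites: double booking

print(slot.booked_by)  # "bob" -- Alice's booking was silently lost

# With a lock, check-and-write becomes atomic and the conflict is caught.
lock = threading.Lock()

def book(slot, user):
    with lock:
        if slot.booked_by is None:
            slot.booked_by = user
            return True
        return False

slot2 = Slot()
print(book(slot2, "alice"))  # True
print(book(slot2, "bob"))    # False -- conflict detected, no double booking
```

The same idea scales up: whatever mechanism the system uses, the availability check and the reservation must happen as one indivisible operation.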
In a booking system, which component is primarily responsible for managing the availability of slots and preventing conflicts?
Think about where the actual data about slot availability is stored and controlled.
The database manages slot availability using locks or transactions to ensure that no two bookings can reserve the same slot simultaneously.
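One common way a database enforces this is an atomic, conditional UPDATE: the availability check lives in the WHERE clause, so check-and-reserve is a single statement. A sketch using SQLite (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE slots (id INTEGER PRIMARY KEY, booked_by TEXT)")
conn.execute("INSERT INTO slots (id, booked_by) VALUES (1, NULL)")
conn.commit()

def try_book(user):
    # The WHERE clause makes check-and-reserve one atomic statement:
    # only one booking can flip booked_by from NULL to a user.
    cur = conn.execute(
        "UPDATE slots SET booked_by = ? WHERE id = 1 AND booked_by IS NULL",
        (user,),
    )
    conn.commit()
    return cur.rowcount == 1   # 1 row changed => we won the slot

print(try_book("alice"))  # True  -- slot reserved
print(try_book("bob"))    # False -- already taken, no double booking
```

Row-level locking (e.g. SELECT ... FOR UPDATE in PostgreSQL or MySQL) achieves the same guarantee when the booking spans several statements inside a transaction.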
You need to design a booking system that can handle thousands of concurrent booking requests per second without double booking. Which approach best supports this requirement?
Think about how to coordinate access to shared resources across many servers.
Distributed locking or centralized coordination ensures that concurrent requests are serialized properly, preventing double bookings even at scale.
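At scale, the lock itself must be shared across servers, typically via a service such as Redis (SET with NX and an expiry) or ZooKeeper (ephemeral nodes). The in-process sketch below mimics that acquire/release protocol; the `LockManager` class is illustrative, and a real deployment would call out to a shared lock service instead:

```python
import time
import uuid

class LockManager:
    def __init__(self):
        self._locks = {}  # resource -> (owner_token, expiry_time)

    def acquire(self, resource, ttl=5.0):
        # Grant the lock only if it is free or its TTL has expired;
        # the token proves ownership, like the value in Redis SET NX.
        now = time.monotonic()
        holder = self._locks.get(resource)
        if holder is None or holder[1] < now:
            token = str(uuid.uuid4())
            self._locks[resource] = (token, now + ttl)
            return token
        return None

    def release(self, resource, token):
        # Only the current owner may release the lock.
        holder = self._locks.get(resource)
        if holder and holder[0] == token:
            del self._locks[resource]

manager = LockManager()
t1 = manager.acquire("slot:42")
t2 = manager.acquire("slot:42")
print(t1 is not None)  # True  -- first request holds the lock
print(t2)              # None  -- second request must wait or retry
manager.release("slot:42", t1)
print(manager.acquire("slot:42") is not None)  # True -- free again
```

The TTL matters in practice: if a server crashes while holding the lock, the expiry prevents the slot from being locked forever.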
Pessimistic locking locks a slot during booking to prevent others from booking it simultaneously. What is a key tradeoff of this approach?
Consider what happens when many users try to book the same slot and have to wait.
Pessimistic locking prevents conflicts but can reduce responsiveness because users must wait for locks to be released before proceeding.
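The responsiveness cost is easy to see in a small timing sketch (function and user names are illustrative): the lock is held for the entire booking, so a second user blocks for the full duration of the first:

```python
import threading
import time

slot_lock = threading.Lock()
results = []

def slow_booking(user, hold_seconds):
    with slot_lock:                  # lock held for the whole booking
        time.sleep(hold_seconds)     # stand-in for e.g. payment processing
        results.append(user)

t = threading.Thread(target=slow_booking, args=("alice", 0.3))
t.start()
time.sleep(0.1)                      # let alice grab the lock first

start = time.monotonic()
slow_booking("bob", 0.0)             # bob must wait out alice's lock
waited = time.monotonic() - start
t.join()

print(results)        # ['alice', 'bob'] -- serialized, no conflict
print(waited > 0.15)  # True -- bob stalled while alice held the lock
```

Correctness is preserved, but every concurrent user of a hot slot pays the full lock-hold time, which is why optimistic schemes are often preferred when conflicts are rare.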
A booking system uses optimistic concurrency control with version checks on each slot record. If the system can process 500 booking attempts per second and 20% of attempts must be retried due to version conflicts, roughly how many bookings per second can it complete without significant delays?
Calculate effective throughput after accounting for retries.
With a 20% retry rate, each successful booking consumes about 1.2 attempts on average, so effective throughput is 500 / 1.2 ≈ 417 bookings per second; in round numbers, the system can complete roughly 400 bookings per second before retry overhead causes noticeable delays.
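A compare-and-swap on a version counter is the core of this scheme. The sketch below (the `SlotRecord` class is illustrative) shows a stale version being rejected, then reproduces the retry arithmetic from the answer:

```python
class SlotRecord:
    def __init__(self):
        self.version = 0
        self.booked_by = None

def try_book(record, user, expected_version):
    # Compare-and-swap: the write succeeds only if nobody else
    # has modified the record since we read it.
    if record.version != expected_version:
        return False                 # conflict -> caller re-reads and retries
    record.booked_by = user
    record.version += 1
    return True

record = SlotRecord()
v = record.version                   # both users read version 0
print(try_book(record, "alice", v))  # True  -- version is now 1
print(try_book(record, "bob", v))    # False -- stale version, must retry

# Throughput arithmetic: a 20% retry rate means each booking takes
# 1.2 attempts on average, so successful throughput is:
effective = 500 / 1.2
print(round(effective))              # 417 bookings per second
```

Note the asymmetry with pessimistic locking: nobody waits on a lock, but losers of a conflict pay with a retry, so the approach works best when contention on any single slot is low.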