Booking conflict resolution in LLD - Scalability & System Analysis

| Users | Booking Requests/Second | Conflict Rate | Database Load | Locking/Queueing | Latency |
|---|---|---|---|---|---|
| 100 | ~10 | Low | Single DB instance handles it well | Minimal locking | Low |
| 10,000 | ~1,000 | Moderate | DB under moderate load, some locks | Lock contention visible | Slightly increased |
| 1,000,000 | ~100,000 | High | Single DB overloaded | High lock contention, queueing delays | High, timeouts possible |
| 100,000,000 | ~10,000,000 | Very high | DB cluster with sharding required | Distributed locking or conflict resolution needed | Critical, fallbacks needed |
The database is the first bottleneck because booking conflict resolution requires checking and updating availability atomically. As request volume grows, the DB suffers high lock contention and queueing delays, leading to slow responses and possible request failures.
- Optimistic Locking: Use version numbers or timestamps to detect conflicts and retry without heavy locks.
- Pessimistic Locking: Lock resources during booking to prevent conflicts, at the cost of reduced concurrency.
- Queueing Requests: Serialize booking requests per resource to avoid conflicts.
- Horizontal Scaling: Add more application servers behind load balancers to handle more requests.
- Database Sharding: Partition booking data by resource or region to reduce contention.
- Caching Availability: Use a cache to reduce DB reads, but keep it consistent (e.g., invalidate on booking writes).
- Conflict Resolution Service: Use a dedicated service or distributed lock manager (e.g., Redis Redlock) to coordinate bookings.
- Eventual Consistency: For less critical bookings, allow temporary conflicts and resolve asynchronously.
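The optimistic-locking strategy above can be sketched with an in-memory record. The `Seat` type, `book` helper, and retry count are illustrative assumptions; in a real system the compare-and-swap runs inside the database (e.g., `UPDATE ... SET holder = ?, version = version + 1 WHERE id = ? AND version = ?` and checking the affected-row count):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Seat:
    holder: Optional[str] = None
    version: int = 0  # bumped on every successful write


class ConflictError(Exception):
    pass


def book(seat: Seat, user: str, max_retries: int = 3) -> int:
    """Book a seat with optimistic locking: read the version, check
    availability, and commit only if no one wrote in between."""
    for _ in range(max_retries):
        read_version = seat.version  # 1. read current state
        if seat.holder is not None:
            raise ConflictError("seat already taken")
        # 2. compare-and-swap: commit only if the version is unchanged
        if seat.version == read_version:
            seat.holder = user
            seat.version += 1
            return seat.version
        # 3. a concurrent writer won the race; retry the read-check-write loop
    raise ConflictError("too many conflicting writers")
```

The key trade-off versus pessimistic locking: no locks are held while the user decides, but under heavy contention many requests burn retries and fail.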
- At 1,000 booking requests/sec, DB needs to handle ~1,000 QPS with atomic checks.
- Each booking record ~1 KB, 1M bookings/day -> ~1 GB storage/day.
- Network bandwidth depends on request size; assume 1 KB/request -> ~1 MB/s at 1,000 QPS.
- Locking and retries increase CPU usage on DB and app servers.
- Scaling DB horizontally and adding caching reduces cost per request.
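The capacity estimates in the bullets are easy to sanity-check with back-of-the-envelope arithmetic (numbers taken directly from the list above):

```python
QPS = 1_000                   # booking requests per second
REQUEST_BYTES = 1_024         # ~1 KB per request
BOOKINGS_PER_DAY = 1_000_000  # 1M bookings per day
RECORD_BYTES = 1_024          # ~1 KB per booking record

# Network: 1,000 req/s * 1 KB ~= 1 MB/s inbound
bandwidth_mb_s = QPS * REQUEST_BYTES / 1_000_000

# Storage: 1M records/day * 1 KB ~= 1 GB/day
storage_gb_day = BOOKINGS_PER_DAY * RECORD_BYTES / 1e9

print(f"bandwidth: {bandwidth_mb_s:.2f} MB/s")    # ~1 MB/s
print(f"storage:   {storage_gb_day:.2f} GB/day")  # ~1 GB/day
```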
Start by explaining the booking flow and where conflicts happen. Identify the database as the bottleneck due to atomicity needs. Discuss trade-offs between optimistic and pessimistic locking. Propose scaling by sharding and caching. Mention fallback strategies like queueing or eventual consistency. Keep answers structured: problem, bottleneck, solution, trade-offs.
Your database handles 1000 QPS for booking requests. Traffic grows 10x to 10,000 QPS. What do you do first and why?
Answer: First reduce contention on the primary: switch availability checks to optimistic locking so writes do not hold long locks, and offload reads to replicas and a cache. If hot resources still saturate a single node, shard booking data by resource or region to spread the write load. This keeps locking delays from turning the DB into the bottleneck.
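The sharding step in the answer can be sketched as hash-based partitioning by resource id. The `shard_for` helper and shard count are hypothetical; production systems often prefer consistent hashing so that adding a shard moves fewer keys:

```python
import hashlib

NUM_SHARDS = 8  # assumed shard count, for illustration only


def shard_for(resource_id: str, num_shards: int = NUM_SHARDS) -> int:
    """Deterministically map a resource id to a shard, so all bookings
    for the same resource land on the same database shard (and conflict
    checks for that resource stay local to one node)."""
    digest = hashlib.sha256(resource_id.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_shards
```

Partitioning by resource (rather than by user) is what keeps the atomic availability check on a single shard, avoiding cross-shard transactions for the common case.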