
Booking conflict resolution in LLD - Scalability & System Analysis

Scalability Analysis - Booking conflict resolution
Growth Table: Booking Conflict Resolution
| Users | Booking Requests/Second | Conflict Rate | Database Load | Locking/Queueing | Latency |
|---|---|---|---|---|---|
| 100 | ~10 | Low | Single DB instance handles well | Minimal locking | Low latency |
| 10,000 | ~1,000 | Moderate | DB under moderate load, some locks | Lock contention visible | Latency increases slightly |
| 1,000,000 | ~100,000 | High | Single DB overloaded | High lock contention, queueing delays | High latency, timeouts possible |
| 100,000,000 | ~10,000,000 | Very high | DB cluster needed, sharding required | Distributed locking or conflict resolution needed | Latency critical, fallback needed |
First Bottleneck

The database is the first bottleneck because booking conflict resolution requires checking and updating availability atomically. As user requests grow, the DB faces high contention and locking delays, causing slow responses and possible failures.
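The atomic check-and-update can be sketched as a single conditional UPDATE: the database both checks availability and claims the slot in one statement, so two concurrent bookings cannot both succeed. This is a minimal illustration using an in-memory SQLite table with a hypothetical `slots` schema, not a production design.

```python
import sqlite3

# Hypothetical slots table: one row per bookable resource.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE slots (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("INSERT INTO slots VALUES (1, 'available')")
conn.commit()

def book(slot_id: int) -> bool:
    # Check and claim in one atomic statement: the WHERE clause is the
    # availability check, the UPDATE is the claim.
    cur = conn.execute(
        "UPDATE slots SET status = 'booked' "
        "WHERE id = ? AND status = 'available'",
        (slot_id,),
    )
    conn.commit()
    return cur.rowcount == 1  # exactly one row claimed => booking won

print(book(1))  # True  - first booking claims the slot
print(book(1))  # False - slot is already booked
```

Because the check and the write are one statement, contention concentrates on the rows being updated, which is exactly why the database saturates first as request volume grows.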

Scaling Solutions
  • Optimistic Locking: Use version numbers or timestamps to detect conflicts and retry without heavy locks.
  • Pessimistic Locking: Lock resources for the duration of the booking to prevent conflicts, at the cost of reduced concurrency.
  • Queueing Requests: Serialize booking requests per resource to avoid conflicts.
  • Horizontal Scaling: Add more application servers behind load balancers to handle more requests.
  • Database Sharding: Partition booking data by resource or region to reduce contention.
  • Caching Availability: Use cache to reduce DB reads, but ensure cache consistency.
  • Conflict Resolution Service: Use a dedicated service or distributed lock manager (e.g., Redis Redlock) to coordinate bookings.
  • Eventual Consistency: For less critical bookings, allow temporary conflicts and resolve asynchronously.
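The optimistic-locking strategy above can be sketched with a version column: each update succeeds only if the version is unchanged since the read, and a mismatch triggers a re-read and retry instead of holding a lock. The `availability` schema and retry limit here are illustrative assumptions.

```python
import sqlite3

# Hypothetical availability row: remaining seats plus a version counter.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE availability "
    "(id INTEGER PRIMARY KEY, seats INTEGER, version INTEGER)"
)
conn.execute("INSERT INTO availability VALUES (1, 2, 0)")
conn.commit()

def book_one_seat(res_id: int, max_retries: int = 3) -> bool:
    for _ in range(max_retries):
        seats, version = conn.execute(
            "SELECT seats, version FROM availability WHERE id = ?",
            (res_id,),
        ).fetchone()
        if seats <= 0:
            return False  # sold out, no point retrying
        cur = conn.execute(
            "UPDATE availability SET seats = seats - 1, version = version + 1 "
            "WHERE id = ? AND version = ?",
            (res_id, version),
        )
        conn.commit()
        if cur.rowcount == 1:
            return True  # our version matched: booking committed
        # Version changed under us -> another writer won; re-read and retry.
    return False

print(book_one_seat(1), book_one_seat(1), book_one_seat(1))  # True True False
```

No lock is held between the read and the write, so concurrency stays high; the trade-off is wasted work on retries when the conflict rate is high, which is when pessimistic locking or per-resource queueing becomes more attractive.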
Back-of-Envelope Cost Analysis
  • At 1,000 booking requests/sec, DB needs to handle ~1,000 QPS with atomic checks.
  • Each booking record ~1 KB, 1M bookings/day -> ~1 GB storage/day.
  • Network bandwidth depends on request size; assume 1 KB/request -> ~1 MB/s at 1,000 QPS.
  • Locking and retries increase CPU usage on DB and app servers.
  • Scaling DB horizontally and adding caching reduces cost per request.
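The arithmetic behind these estimates is simple enough to verify directly (assuming the stated 1 KB per booking record and per request, and decimal units):

```python
# Back-of-envelope figures from the estimates above.
qps = 1_000                  # booking requests per second
record_kb = 1                # ~1 KB per booking record / request
bookings_per_day = 1_000_000

storage_gb_per_day = bookings_per_day * record_kb / 1_000_000
bandwidth_mb_per_s = qps * record_kb / 1_000

print(storage_gb_per_day)  # 1.0  -> ~1 GB of storage per day
print(bandwidth_mb_per_s)  # 1.0  -> ~1 MB/s of request bandwidth
```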
Interview Tip

Start by explaining the booking flow and where conflicts happen. Identify the database as the bottleneck due to atomicity needs. Discuss trade-offs between optimistic and pessimistic locking. Propose scaling by sharding and caching. Mention fallback strategies like queueing or eventual consistency. Keep answers structured: problem, bottleneck, solution, trade-offs.

Self-Check Question

Your database handles 1000 QPS for booking requests. Traffic grows 10x to 10,000 QPS. What do you do first and why?

Answer: The first step is to reduce database contention: switch to optimistic locking so availability checks do not hold locks, and offload availability reads to a cache or read replicas. If write load is still too high, shard booking data by resource or region to distribute it. This keeps the DB from becoming a bottleneck due to locking delays.

Key Result
Booking conflict resolution first breaks at the database due to locking and contention. Scaling requires locking strategies, sharding, caching, and horizontal scaling of app and DB layers.