
Why booking tests stress availability and concurrency in LLD - Scalability Evidence

Growth Table: Booking Tests Availability and Concurrency
| Users | Requests per Second | Concurrent Bookings | Database Load | System Changes |
|---|---|---|---|---|
| 100 | ~10-50 RPS | ~5-10 | Low; single DB instance | Simple locking, basic availability checks |
| 10,000 | ~1,000 RPS | ~200-300 | Moderate; DB nearing capacity | Introduce caching, connection pooling, read replicas |
| 1,000,000 | ~50,000 RPS | ~5,000 | High; DB bottleneck likely | Sharding, distributed locking, queueing for concurrency control |
| 100,000,000 | ~5,000,000 RPS | ~500,000 | Very high; multiple DB clusters | Global distribution, advanced concurrency control, event sourcing |
First Bottleneck

The database is the first bottleneck because booking tests require checking and updating availability atomically to avoid double bookings. As concurrency grows, locking and transaction conflicts increase, causing delays and failures.
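The atomic check-and-update can be sketched with a single transaction that holds a write lock across both steps, so two concurrent requests can never both see the last remaining slot. This is a minimal illustration using SQLite's `BEGIN IMMEDIATE` (a server database would typically use `SELECT ... FOR UPDATE` instead); the `slots` table and `book_slot` helper are illustrative names, not from the source.

```python
import sqlite3

def book_slot(conn, slot_id):
    """Atomically check availability and decrement capacity.

    BEGIN IMMEDIATE takes a write lock before the read, so a concurrent
    booking cannot read the same remaining count (pessimistic locking).
    """
    try:
        conn.execute("BEGIN IMMEDIATE")
        row = conn.execute(
            "SELECT remaining FROM slots WHERE id = ?", (slot_id,)
        ).fetchone()
        if row is None or row[0] <= 0:
            conn.rollback()
            return False  # no availability left: reject, don't double-book
        conn.execute(
            "UPDATE slots SET remaining = remaining - 1 WHERE id = ?",
            (slot_id,),
        )
        conn.commit()
        return True
    except sqlite3.OperationalError:
        conn.rollback()  # lock contention: caller may retry
        return False

# Demo: a slot with capacity 2 accepts exactly two bookings.
conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit; explicit BEGIN
conn.execute("CREATE TABLE slots (id INTEGER PRIMARY KEY, remaining INTEGER)")
conn.execute("INSERT INTO slots VALUES (1, 2)")
results = [book_slot(conn, 1) for _ in range(3)]
print(results)  # [True, True, False]
```

The failure mode this prevents is the read-then-write race: without the lock, two requests can both read `remaining = 1` and both decrement it.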

Scaling Solutions
  • Horizontal scaling: Add more application servers behind load balancers to handle more concurrent requests.
  • Database read replicas: Offload read queries to replicas to reduce load on the primary DB.
  • Caching: Cache availability data to reduce DB hits, with short TTL to keep data fresh.
  • Sharding: Partition booking data by region or test center to reduce contention.
  • Distributed locking or optimistic concurrency: Use Redis or ZooKeeper for distributed locks, or version-based checks in the database, to prevent double bookings.
  • Queueing: Serialize booking requests in a queue to control concurrency and avoid conflicts.
  • Event sourcing: Use event logs to track bookings and rebuild state, improving consistency at scale.
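Of the options above, optimistic concurrency is often the cheapest first step because it needs no extra infrastructure: each row carries a version number, and an update only applies if the version is unchanged since the read. A minimal sketch (the `version` column and `book_optimistic` helper are assumptions for illustration):

```python
import sqlite3

def book_optimistic(conn, slot_id, retries=3):
    """Optimistic concurrency control: read the row with its version,
    then update only if the version is still the same; retry on conflict."""
    for _ in range(retries):
        row = conn.execute(
            "SELECT remaining, version FROM slots WHERE id = ?", (slot_id,)
        ).fetchone()
        if row is None or row[0] <= 0:
            return False  # sold out: a genuine rejection, not a conflict
        remaining, version = row
        cur = conn.execute(
            "UPDATE slots SET remaining = ?, version = version + 1 "
            "WHERE id = ? AND version = ?",
            (remaining - 1, slot_id, version),
        )
        if cur.rowcount == 1:
            conn.commit()
            return True  # nobody changed the row between our read and write
        # rowcount == 0 means a concurrent writer won; loop and retry
    return False

# Demo: capacity 1, so the second booking is rejected.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE slots (id INTEGER PRIMARY KEY, remaining INTEGER, version INTEGER)"
)
conn.execute("INSERT INTO slots VALUES (1, 1, 0)")
conn.commit()
outcomes = [book_optimistic(conn, 1), book_optimistic(conn, 1)]
print(outcomes)  # [True, False]
```

Under low contention this avoids holding locks at all; under heavy contention the retry rate climbs, which is when queueing or distributed locks become worth their cost.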
Back-of-Envelope Cost Analysis
  • At 10,000 users: ~1,000 RPS -> DB must handle ~1,000 writes/reads per second.
  • Storage: Each booking record ~1 KB, 1M bookings = ~1 GB storage.
  • Bandwidth: Assuming 1 KB per request/response, 1,000 RPS = ~1 MB/s network usage.
  • Concurrency control adds latency; expect 50-100 ms per booking transaction at scale.
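The estimates above are simple unit conversions, which can be checked in a few lines (the figures mirror the source's assumptions of 1 KB per record and per request):

```python
# Back-of-envelope check for the 10,000-user tier.
rps = 1_000            # requests per second at 10,000 users
record_kb = 1          # ~1 KB per booking record (and per request/response)
bookings = 1_000_000   # total stored bookings

storage_gb = bookings * record_kb / 1_000_000   # KB -> GB
bandwidth_mb_s = rps * record_kb / 1_000        # KB/s -> MB/s

print(storage_gb, bandwidth_mb_s)  # 1.0 1.0 -> ~1 GB storage, ~1 MB/s bandwidth
```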
Interview Tip

Start by identifying the critical resource (database) and why concurrency causes issues. Discuss how availability checks must be atomic to prevent double bookings. Then explain scaling steps: caching, read replicas, sharding, and concurrency control mechanisms. Always justify why each solution fits the bottleneck.

Self Check

Your database handles 1000 QPS. Traffic grows 10x to 10,000 QPS. What do you do first?

Answer: Add read replicas and implement caching to reduce load on the primary database before considering sharding or more complex solutions.
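The caching half of that answer can be sketched as a read-through cache with a short TTL, as described in the scaling solutions above: a fresh entry is served from memory, a stale or missing one falls through to the database. `AvailabilityCache`, `fetch_fn`, and `ttl_seconds` are illustrative names, not from the source.

```python
import time

class AvailabilityCache:
    """Read-through cache with a short TTL to keep availability data fresh."""

    def __init__(self, fetch_fn, ttl_seconds=2.0):
        self.fetch_fn = fetch_fn      # loads the value from the DB on a miss
        self.ttl = ttl_seconds        # short TTL bounds staleness
        self._store = {}              # key -> (value, expires_at)

    def get(self, key):
        value, expires = self._store.get(key, (None, 0.0))
        if time.monotonic() < expires:
            return value              # fresh: served from cache, no DB hit
        value = self.fetch_fn(key)    # stale or missing: hit the DB once
        self._store[key] = (value, time.monotonic() + self.ttl)
        return value

# Demo: two reads of the same key cost only one "DB" call.
db_calls = []
def fetch_from_db(key):
    db_calls.append(key)
    return 5  # pretend 5 slots are available

cache = AvailabilityCache(fetch_from_db, ttl_seconds=60)
first = cache.get("slot-1")
second = cache.get("slot-1")
print(first, second, len(db_calls))  # 5 5 1
```

Note the trade-off the TTL encodes: within the TTL window the cache may report a slot as available after it was booked, so the atomic check at write time remains the source of truth.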

Key Result
Booking test availability systems first hit database bottlenecks due to concurrency and atomic availability checks; scaling requires caching, read replicas, sharding, and distributed concurrency control.