## Elevator, Floor, Request Classes in LLD - Scalability & System Analysis

| Users (People) | Requests per Minute | Elevators Needed | System Complexity | Latency (Wait Time) |
|---|---|---|---|---|
| 100 | 50 | 1 | Simple: single elevator, direct request handling | Low (seconds) |
| 10,000 | 5,000 | 10-20 | Moderate: multiple elevators, request queueing, scheduling | Moderate (tens of seconds) |
| 1,000,000 | 500,000 | 100+ | Complex: distributed control, load balancing, fault tolerance | Variable, optimized by scheduling algorithms |
| 100,000,000 | 50,000,000 | Thousands | Very complex: multi-building coordination, advanced predictive scheduling | Challenging, requires AI and real-time analytics |
At small scale (100 users), the bottleneck is the elevator's mechanical speed and door operation time.
At medium scale (10,000 users), the bottleneck is the request processing and scheduling logic in the control system, as many requests compete for limited elevators.
At large scale (1 million+ users), the bottleneck shifts to communication and coordination between multiple elevator controllers and the central system, plus data processing delays.
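The medium-scale bottleneck above (scheduling logic contending for a few cars) can be made concrete with a minimal sketch of the Elevator, Request, and controller classes. The class and method names here are illustrative, not a fixed API; the dispatch rule is a simple nearest-car heuristic, with queue length as a tie-breaker.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Request:
    source_floor: int
    destination_floor: int

@dataclass
class Elevator:
    elevator_id: int
    current_floor: int = 0
    queue: deque = field(default_factory=deque)  # pending requests for this car

    def assign(self, request: Request) -> None:
        self.queue.append(request)

class ElevatorController:
    """Central controller: at medium scale, this scheduling step is the bottleneck."""

    def __init__(self, elevators: list):
        self.elevators = elevators

    def handle(self, request: Request) -> Elevator:
        # Nearest-car heuristic: pick the elevator closest to the pickup floor,
        # breaking ties by the shortest pending queue (a crude load balancer).
        best = min(
            self.elevators,
            key=lambda e: (abs(e.current_floor - request.source_floor), len(e.queue)),
        )
        best.assign(request)
        return best
```

For example, with cars idling on floors 3 and 8, a pickup at floor 4 goes to the first car; every request still funnels through one `handle` call, which is exactly the contention point described above.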
- Horizontal Scaling: Add more elevators to serve more requests concurrently.
- Load Balancing: Distribute requests evenly among elevators to avoid congestion.
- Caching: Cache frequent floor requests or patterns to optimize scheduling.
- Sharding: Divide floors or buildings into zones, each managed by separate controllers.
- Predictive Scheduling: Use historical data to anticipate demand and pre-position elevators.
- Fault Tolerance: Implement fallback mechanisms if an elevator or controller fails.
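The sharding strategy above can be sketched as a router that maps each pickup floor to the zone controller responsible for it. The zone size and zone count here are assumptions chosen for illustration, not values from the source.

```python
def zone_for_floor(floor: int, floors_per_zone: int, num_zones: int) -> int:
    """Sharding: map a pickup floor to the index of the zone controller that owns it."""
    # Floors beyond the last full zone are clamped into the final zone.
    return min(floor // floors_per_zone, num_zones - 1)

# Example: a 60-floor building split into 3 zones of 20 floors each.
requests = [2, 19, 20, 45, 59]
assignments = [zone_for_floor(f, floors_per_zone=20, num_zones=3) for f in requests]
# → [0, 0, 1, 2, 2]
```

Each zone controller then schedules only its own traffic, so no single controller sees the full request stream.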
- Requests per second at 10,000 users: ~83 (5,000 requests/min ÷ 60)
- Each elevator can handle ~5-10 requests per minute depending on speed and stops.
- Storage for request logs: minimal per request (a few KB each), accumulating to GBs at large scale.
- Network bandwidth: Low per request, mostly control signals; scales with number of elevators and controllers.
- CPU: Scheduling algorithms run in milliseconds; more elevators increase CPU needs linearly.
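The estimates above can be verified with quick arithmetic. The 2 KB per log entry used here is an assumption within the "few KB" range stated.

```python
# Throughput at 10,000 users (from the table: 5,000 requests/min).
requests_per_minute = 5_000
requests_per_second = requests_per_minute / 60           # ~83, matching the estimate above

# Log storage: assume ~2 KB per request record (within the "few KB" range).
bytes_per_log = 2 * 1024
daily_requests = requests_per_minute * 60 * 24           # 7.2 million requests/day
daily_log_gb = daily_requests * bytes_per_log / 1024**3  # ~13.7 GB/day at this scale
```

Even at the 10,000-user tier, logs reach double-digit GB per day, which is why the storage estimate shifts from "minimal" to GBs at large scale.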
- Start by defining the scale and key components: elevators, floors, requests.
- Discuss how requests flow from users to elevators and how scheduling works.
- Identify bottlenecks at each scale and propose targeted solutions.
- Use real-world analogies like traffic lights or taxi dispatch to explain scheduling.
- Always mention trade-offs: cost vs. latency vs. complexity.
Question: Your elevator control system handles 1,000 requests per minute. Traffic grows 10x. What do you do first?
Answer: First add more elevators (horizontal scaling) to absorb the extra load, then improve the scheduling algorithm so the larger fleet is used efficiently.
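As a first-pass sizing check for the 10x scenario (the starting fleet of 10 cars is a hypothetical value, not from the source):

```python
import math

current_rpm = 1_000                          # requests per minute today
current_fleet = 10                           # hypothetical current number of elevators
load_per_car = current_rpm / current_fleet   # 100 requests/min per car

# Horizontal scaling: grow the fleet so per-car load stays constant at 10x traffic.
target_rpm = current_rpm * 10
fleet_needed = math.ceil(target_rpm / load_per_car)  # 100 cars: a 10x fleet
```

Scaling the fleet in proportion to traffic keeps per-car load flat; scheduling improvements then reduce how many of those cars are actually needed.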
