| Requests per Second | Queue Length | Response Time | Resource Usage |
|---|---|---|---|
| 100 | Short | Low latency | Single disk, CPU low |
| 10,000 | Medium | Moderate latency | Disk busy, CPU moderate |
| 1,000,000 | Long | High latency | Disk saturated, CPU high |
| 100,000,000 | Very long | Very high latency | Disk overloaded, CPU maxed |
Scheduling algorithms (SCAN, LOOK) in LLD - Scalability & System Analysis
Disk I/O is the first bottleneck: SCAN and LOOK optimize disk head movement, but they cannot increase the physical speed of the disk itself. As request volume grows, the disk queue lengthens and response times climb. Several strategies can push this limit back:
- Horizontal scaling: Add more disks and distribute requests (e.g., RAID, sharding data across disks).
- Caching: Use memory caches to reduce disk reads for repeated data.
- Upgrade hardware: Use SSDs or faster disks to reduce seek time.
- Load balancing: Distribute requests evenly to avoid hotspots.
- Algorithm tuning: Use LOOK to reduce unnecessary disk head movement compared to SCAN.
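To make the last point concrete, here is a minimal sketch comparing total head travel under SCAN and LOOK for a single upward pass. The request queue, starting head position, and cylinder count are illustrative assumptions, not values from the text.

```python
def scan_movement(requests, head, max_cylinder):
    """SCAN (moving up): sweep to the disk edge, then reverse.

    Requests above the head are serviced in passing; we count head travel only.
    """
    total = max_cylinder - head              # sweep up to the last cylinder
    pos = max_cylinder
    for r in sorted((r for r in requests if r < head), reverse=True):
        total += pos - r                     # travel back down to each request
        pos = r
    return total

def look_movement(requests, head):
    """LOOK (moving up): reverse at the last pending request, not the edge."""
    up = [r for r in requests if r >= head]
    pos = max(up) if up else head
    total = pos - head                       # only go as far as the last request
    for r in sorted((r for r in requests if r < head), reverse=True):
        total += pos - r
        pos = r
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(scan_movement(queue, head=53, max_cylinder=199))  # 331
print(look_movement(queue, head=53))                    # 299
```

LOOK saves the wasted travel from the last request (cylinder 183) out to the disk edge (cylinder 199) and back, which is exactly the head movement SCAN spends for nothing.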
At 10,000 requests/sec, disk I/O bandwidth and seek time become critical. Each request may need a 5-10 ms seek on an HDD, which caps a single disk at roughly 100-200 requests/sec.
Handling 1,000,000 requests/sec therefore requires thousands of disks (or a move to SSDs), which raises cost significantly.
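The per-disk figures above reduce to quick back-of-envelope arithmetic; the 7.5 ms seek time below is an assumed midpoint of the 5-10 ms range, not a measured value.

```python
import math

seek_time_ms = 7.5                         # assumed midpoint of 5-10 ms HDD seek
per_disk_rps = 1000 / seek_time_ms         # ~133 requests/sec per disk

target_rps = 1_000_000
disks_needed = math.ceil(target_rps * seek_time_ms / 1000)
print(disks_needed)  # 7500
```

At ~133 requests/sec per HDD, a 1,000,000 requests/sec target needs on the order of 7,500 disks, which is why the table shows the disk saturating long before the CPU.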
CPU usage grows with queue management and scheduling overhead but is usually less critical than disk I/O.
Start by explaining how SCAN and LOOK reduce disk head movement to improve throughput. Then discuss physical disk limits as bottlenecks. Finally, propose scaling solutions like adding disks, caching, and upgrading hardware.
Your disk handles 1000 requests/sec. Traffic grows 10x to 10,000 requests/sec. What do you do first?
Answer: Add more disks and distribute requests across them, shortening each disk's queue and avoiding saturation of a single disk.
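The answer above can be sketched as simple hash sharding of request keys across disks; the disk count, key format, and choice of MD5 are assumptions for illustration only.

```python
import hashlib
from collections import Counter

NUM_DISKS = 10   # 10x headroom over a single disk's capacity

def disk_for(key: str) -> int:
    """Map a request key to one of NUM_DISKS disks via simple hash sharding."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % NUM_DISKS

# Distribute 10,000 requests; a decent hash spreads them roughly evenly,
# so each disk sees about 1,000 requests instead of one disk seeing all 10,000.
load = Counter(disk_for(f"block-{i}") for i in range(10_000))
```

Hashing keeps any one disk from becoming a hotspot, which is the "distribute requests evenly" point from the scaling list; consistent hashing would be the next step if disks are added or removed at runtime.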
