You want to limit the number of requests a user can make to an API to 100 requests per minute. Which rate limiting algorithm is best suited to allow some bursts but still enforce the limit?
Think about which algorithm allows bursts but controls the average rate.
The token bucket algorithm allows bursts up to the bucket capacity while its refill rate enforces the average limit. Fixed window counters can allow traffic spikes at window boundaries. Sliding window logs are accurate but expensive in storage and computation. The leaky bucket smooths traffic to a constant output rate, so it does not permit bursts at all.
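The accumulate-and-spend behavior described above can be sketched as follows. This is a minimal single-process illustration, not a production implementation; the class name and parameters are chosen for this example.

```python
import time

class TokenBucket:
    """Token bucket: capacity sets the burst size; refill rate sets the average."""

    def __init__(self, capacity, refill_rate_per_sec):
        self.capacity = capacity              # max tokens = max burst
        self.refill_rate = refill_rate_per_sec
        self.tokens = float(capacity)         # start with a full bucket
        self.last_refill = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Accumulate tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.refill_rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# 100 requests per minute on average, with bursts of up to 100 at once.
bucket = TokenBucket(capacity=100, refill_rate_per_sec=100 / 60)
burst = [bucket.allow() for _ in range(100)]
```

A full-capacity burst of 100 requests is admitted immediately; afterward tokens trickle back at 100/60 per second, enforcing the average rate.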
You need to design a rate limiter that works across multiple servers handling user requests. Which component is essential to ensure consistent rate limiting across all servers?
Think about how to keep counters consistent across servers.
A centralized data store (such as Redis) ensures all servers read and update the same counters, so a user cannot bypass the limit by spreading requests across servers. Local, uncoordinated counters on each server lead to inconsistent enforcement.
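To illustrate why the shared store matters, here is a small simulation in which three server instances all consult one central counter store. The store stands in for something like Redis; names and the fixed-window scheme are illustrative assumptions, not a specific product's API.

```python
from collections import defaultdict

class CentralStore:
    """Stands in for a shared store (e.g. Redis): one counter per (user, window)."""

    def __init__(self):
        self.counters = defaultdict(int)

    def incr(self, key):
        # Atomic in a real store (Redis INCR); single-threaded here.
        self.counters[key] += 1
        return self.counters[key]

class ApiServer:
    """Each server checks the same central store, so the limit holds globally."""

    def __init__(self, store, limit=100, window_sec=60):
        self.store, self.limit, self.window = store, limit, window_sec

    def handle(self, user_id, now):
        window = int(now // self.window)
        count = self.store.incr((user_id, window))
        return count <= self.limit

store = CentralStore()
servers = [ApiServer(store) for _ in range(3)]
# One user sends 150 requests round-robin across three servers in one window.
results = [servers[i % 3].handle("alice", now=0) for i in range(150)]
```

Exactly 100 of the 150 requests succeed, no matter which server each request hits; with per-server local counters, the same user could have been allowed up to 300.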
Your API receives millions of requests per minute. Which approach best scales the rate limiter to handle this load without becoming a bottleneck?
Consider how to distribute load and avoid single points of failure.
Sharding counters across multiple Redis instances distributes the load and avoids a single bottleneck. A single Redis instance can be overwhelmed at millions of requests per minute; a relational database adds too much latency per request; purely local in-memory counters lack persistence and cross-server consistency.
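The core of the sharding idea is routing each user to a fixed shard via a stable hash of the user id. A minimal sketch, with plain dicts standing in for the Redis instances (the class and method names are illustrative):

```python
import hashlib

class ShardedCounters:
    """Spread per-user counters across N shards by hashing the user id."""

    def __init__(self, num_shards):
        self.shards = [dict() for _ in range(num_shards)]

    def _shard_for(self, user_id):
        # A stable hash ensures a given user always lands on the same shard.
        h = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
        return self.shards[h % len(self.shards)]

    def incr(self, user_id):
        shard = self._shard_for(user_id)
        shard[user_id] = shard.get(user_id, 0) + 1
        return shard[user_id]

counters = ShardedCounters(num_shards=4)
for user in ("alice", "bob", "carol"):
    for _ in range(5):
        counters.incr(user)
```

Because a user's counter lives on exactly one shard, each increment stays a single-key operation, and adding shards scales write throughput roughly linearly.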
Which rate limiting approach trades off some accuracy for better performance and lower storage needs?
Think about which method uses less data but can cause bursts at window edges.
Fixed window counters are simple and fast, needing only one counter per user per window, but they lose accuracy at window boundaries, where up to twice the limit can pass in a short span. Sliding window logs are accurate but must store a timestamp per request. Token bucket and leaky bucket offer different tradeoffs and maintain per-user state of their own.
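The boundary-burst weakness is easy to demonstrate: send a full window's quota just before a boundary and another full quota just after it. A minimal fixed-window limiter (illustrative names, driven by an explicit clock for reproducibility):

```python
class FixedWindowLimiter:
    """Fixed window counter: one integer per (user, window)."""

    def __init__(self, limit, window_sec):
        self.limit, self.window = limit, window_sec
        self.counts = {}

    def allow(self, user_id, now):
        key = (user_id, int(now // self.window))
        self.counts[key] = self.counts.get(key, 0) + 1
        return self.counts[key] <= self.limit

limiter = FixedWindowLimiter(limit=100, window_sec=60)
# 100 requests just before the minute boundary, 100 just after it:
# all 200 pass within about one second, double the nominal limit.
late = [limiter.allow("u1", now=59.5) for _ in range(100)]
early = [limiter.allow("u1", now=60.5) for _ in range(100)]
```

All 200 requests succeed because each batch lands in a different window, even though they arrive within a second of each other. That accuracy loss is the price paid for storing just one counter per window.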
You expect 10 million users, each allowed 100 requests per minute. You want to store counters for each user in Redis with 8 bytes per counter. Estimate the minimum memory needed to store counters for all users for a 1-minute window.
Calculate: 10 million users * 8 bytes per counter.
10,000,000 users * 8 bytes = 80,000,000 bytes = ~80 MB for the raw counter values. But Redis also stores per-key overhead (the key name, dictionary entry, and expiry metadata), typically tens of bytes per key, so actual memory is several times higher. With that overhead included, a reasonable working estimate is about 800 MB.
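The arithmetic, with the per-key overhead made explicit (the 72-byte overhead figure is an assumption chosen to match the ~800 MB ballpark; real Redis overhead varies with key length and encoding):

```python
users = 10_000_000
counter_bytes = 8                 # one 8-byte counter per user
raw = users * counter_bytes       # raw counter payload only

per_key_overhead = 72             # assumed Redis key/entry overhead, in bytes
total = users * (counter_bytes + per_key_overhead)

print(f"raw: {raw / 1e6:.0f} MB, with overhead: {total / 1e6:.0f} MB")
# prints "raw: 80 MB, with overhead: 800 MB"
```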
