Why Rate Limiting Protects Services in REST APIs: A Performance Analysis
When we protect a service with rate limiting, the check itself has a cost, and we want to know how that cost grows under load.
The question: how much work does the system do per incoming request as more requests accumulate?
Analyze the time complexity of the following rate limiting check.
// Pseudocode for a sliding-window rate limit check
function checkRateLimit(userId, timestamp) {
  requests = getRequestsForUser(userId)  // list of past request times
  recentRequests = filter(requests, r => r > timestamp - window)
  if (recentRequests.length >= limit) {
    return false  // block: user is already at the limit within the window
  }
  addRequest(userId, timestamp)
  return true  // allow request
}
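The pseudocode can be made concrete as a small in-memory sketch. This is not a production implementation: the `WINDOW` and `LIMIT` values are assumptions chosen for illustration, and a plain dictionary stands in for whatever store `getRequestsForUser` reads from.

```python
from collections import defaultdict

WINDOW = 60.0   # seconds per sliding window (assumed value)
LIMIT = 5       # max requests allowed per window (assumed value)

# user_id -> list of past request timestamps (the "request log")
_requests = defaultdict(list)

def check_rate_limit(user_id, timestamp):
    """Sliding-window log check: scans every stored request for the user."""
    # Filter the full log down to requests inside the current window: O(n)
    recent = [t for t in _requests[user_id] if t > timestamp - WINDOW]
    if len(recent) >= LIMIT:
        return False  # block: user already hit the limit in this window
    _requests[user_id].append(timestamp)
    return True  # allow request
```

The first `LIMIT` requests inside a window are allowed and the next is blocked; once the window has passed, the same user is allowed again because the old timestamps no longer match the filter.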
This code checks how many requests a user made recently and blocks if too many.
Look for repeated work done each time a request comes in.
- Primary operation: Filtering the user's past requests to find recent ones.
- How many times: This filtering happens every time a new request arrives.
As the number of requests a user makes grows, the filtering step takes longer.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | Check 10 past requests |
| 100 | Check 100 past requests |
| 1000 | Check 1000 past requests |
Pattern observation: The work grows directly with how many past requests are stored.
Time Complexity: O(n)
This means the time for each check grows linearly with n, the number of requests stored for that user.
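The pattern in the table can be reproduced by counting the comparisons the filter step performs. The helper below is purely illustrative: it re-implements the filter with an explicit counter so the 1:1 relationship between stored requests and work is visible.

```python
def filter_with_count(requests, cutoff):
    """Filter timestamps newer than cutoff, counting one comparison per entry."""
    comparisons = 0
    recent = []
    for t in requests:
        comparisons += 1        # every stored request is examined once
        if t > cutoff:
            recent.append(t)
    return recent, comparisons

# Mirror the table: 10, 100, 1000 stored requests -> 10, 100, 1000 operations
for n in (10, 100, 1000):
    stored = list(range(n))                    # n stored timestamps
    _, ops = filter_with_count(stored, -1)     # cutoff below all entries
    assert ops == n                            # work grows linearly with n
```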
[X] Wrong: "Rate limiting always takes constant time, no matter how many requests are stored."
[OK] Correct: Because filtering past requests depends on how many requests are stored, the time can grow as more requests accumulate.
Understanding how rate limiting scales helps you design services that stay fast and reliable even when many users send requests.
"What if we stored only the count of requests instead of the full list? How would the time complexity change?"
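One way to explore that question: keep a single counter per user for a fixed window instead of the full request log. The sketch below is one possible answer, not the only design; `WINDOW` and `LIMIT` are again assumed values, and the well-known tradeoff is that a fixed window permits bursts at window boundaries (up to 2x the limit across two adjacent windows).

```python
from collections import defaultdict

WINDOW = 60   # seconds per fixed window (assumed value)
LIMIT = 5     # max requests allowed per window (assumed value)

# user_id -> (window_start, count): constant space, no request log
_counters = defaultdict(lambda: (0, 0))

def check_rate_limit_counter(user_id, timestamp):
    """Fixed-window counter: O(1) per check, independent of request history."""
    window_start = int(timestamp // WINDOW) * WINDOW
    start, count = _counters[user_id]
    if start != window_start:
        start, count = window_start, 0  # a new window began: reset the count
    if count >= LIMIT:
        return False  # block
    _counters[user_id] = (start, count + 1)
    return True  # allow
```

Because the check touches only one tuple, the time complexity drops from O(n) to O(1), at the cost of coarser enforcement at window edges.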