Rate limiting with sliding window in Redis - Time & Space Complexity
When using Redis to limit how often a user can do something, we want to know how the time to check and update limits changes as more requests happen.
We ask: How does the work grow when many requests come in quickly?
Analyze the time complexity of the following Redis commands for sliding window rate limiting.
```lua
-- Current timestamp in milliseconds (TIME returns {seconds, microseconds})
local current_time = redis.call('TIME')
local now = tonumber(current_time[1]) * 1000 + tonumber(current_time[2]) / 1000
local key = KEYS[1]
local window = tonumber(ARGV[1])   -- window length in milliseconds
local limit = tonumber(ARGV[2])    -- max requests allowed per window

-- Remove old entries outside the window: O(log n + m), m = entries removed
redis.call('ZREMRANGEBYSCORE', key, 0, now - window)

-- Add current request timestamp: O(log n)
redis.call('ZADD', key, now, now)

-- Expire the key so idle users don't leave data behind forever
redis.call('PEXPIRE', key, window)

-- Count requests in window: O(log n)
local count = redis.call('ZCOUNT', key, now - window, now)

-- Redis converts Lua true to 1 and false to nil, so return an explicit integer
if count <= limit then return 1 else return 0 end
```
This code keeps timestamps of requests in a sorted set, removes old ones, adds the new one, and checks if the count is within the limit.
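To make the steps easier to trace outside Redis, here is a minimal in-memory analogue in Python. `SlidingWindowLimiter` is a hypothetical helper, not part of any Redis client; it mirrors the three steps of the script (remove old, add, count) on a plain sorted list.

```python
import time

class SlidingWindowLimiter:
    """In-memory analogue of the Redis script (hypothetical sketch, no Redis)."""

    def __init__(self, window_ms, limit):
        self.window_ms = window_ms
        self.limit = limit
        self.timestamps = []  # stays sorted because time only moves forward

    def allow(self, now_ms=None):
        now = now_ms if now_ms is not None else time.time() * 1000
        cutoff = now - self.window_ms
        # Remove old entries outside the window (mirrors ZREMRANGEBYSCORE)
        i = 0
        while i < len(self.timestamps) and self.timestamps[i] <= cutoff:
            i += 1
        del self.timestamps[:i]
        # Add the current request timestamp (mirrors ZADD); like the Redis
        # script, this records the request even when it will be rejected
        self.timestamps.append(now)
        # Count requests in the window (mirrors ZCOUNT) and compare
        return len(self.timestamps) <= self.limit
```

Passing explicit millisecond timestamps to `allow` makes the behavior deterministic: with a 1000 ms window and a limit of 3, the fourth request inside the window is rejected, and requests arriving after the window has slid past are allowed again.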
Look for repeated work that depends on input size.
- Primary operation: removing old timestamps with `ZREMRANGEBYSCORE` and counting with `ZCOUNT`.
- How many times: the removal scans every timestamp that has fallen outside the window, so its cost depends on how many expired entries have accumulated.
As more requests happen, the sorted set grows, but old entries get removed.
| Stored Timestamps (n) | Worst-Case Operations |
|---|---|
| 10 | Up to 10 removals + O(log n) count |
| 100 | Up to 100 removals + O(log n) count |
| 1000 | Up to 1000 removals + O(log n) count |
Pattern observation: The work grows roughly with the number of stored timestamps, which depends on the request rate and window size.
Time Complexity: O(n) worst case, O(log n) amortized

A single call can remove up to n expired timestamps, so the worst case grows linearly with the number of entries stored in the window. But each timestamp is inserted once and removed once, so the removal cost spreads across calls, and the amortized cost per request is the O(log n) of the underlying sorted-set operations.
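Because each timestamp is removed exactly once, the total removal work across many requests stays close to the number of requests, even though any single call might remove many entries. A short instrumented run (hypothetical `run` helper) makes this visible by counting removals:

```python
# Instrumented simulation: count how many timestamps each call removes,
# to show that removal work amortizes across requests.
def run(num_requests, window_ms, gap_ms):
    timestamps = []
    total_removed = 0
    now = 0
    for _ in range(num_requests):
        cutoff = now - window_ms
        # Scan-and-remove step, mirroring ZREMRANGEBYSCORE
        i = 0
        while i < len(timestamps) and timestamps[i] <= cutoff:
            i += 1
        total_removed += i
        del timestamps[:i]
        timestamps.append(now)
        now += gap_ms
    return total_removed

# With a 1000 ms window and one request every 100 ms, every timestamp is
# eventually removed exactly once (except the few still inside the final
# window), so total removals stay just below num_requests.
```

Here `run(1000, 1000, 100)` performs 990 removals for 1000 requests: roughly one removal per request on average, which is the amortized view.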
[X] Wrong: "The operations always take constant time regardless of request count."
[OK] Correct: Removing old entries and counting depend on how many timestamps are stored, so more requests mean more work.
Understanding how Redis commands scale with data size helps you design efficient rate limiting, a common real-world problem.
What if we used a fixed-size queue instead of a sorted set? How would the time complexity change?
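One way to explore this question: if timestamps arrive in order, a plain double-ended queue works instead of a sorted set, because expired entries are always at the front. Each timestamp is appended once and popped once, so the per-request cost drops to O(1) amortized, with no O(log n) sorted-set overhead. The sketch below (hypothetical `make_limiter`, using Python's `collections.deque`) also only records allowed requests, a design choice that differs from the Redis script above.

```python
from collections import deque

def make_limiter(window_ms, limit):
    """Deque-based sliding window: O(1) amortized per call (sketch)."""
    timestamps = deque()

    def allow(now_ms):
        cutoff = now_ms - window_ms
        # Expired entries are always at the front, so pop from the left;
        # each timestamp is appended once and popped once overall
        while timestamps and timestamps[0] <= cutoff:
            timestamps.popleft()
        # Unlike the Redis script, record only requests that are allowed
        if len(timestamps) < limit:
            timestamps.append(now_ms)
            return True
        return False

    return allow
```

The trade-off is that a deque lives in one process's memory, while the Redis sorted set gives you a shared limiter across many application servers.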