# Cache Stampede Prevention in Redis: Time & Space Complexity
When using Redis to prevent cache stampedes, we want to know how request-handling time grows as more users ask for the same data.
We ask: how does the system behave when many requests try to update the cache at once?
Analyze the time complexity of the following Redis commands used to prevent cache stampedes.
```
# Try to get cached data
GET cache_key

# If missing, try to acquire a lock (NX: set only if absent, PX 5000: 5-second expiry)
SET lock_key "1" NX PX 5000

# If the lock was acquired, fetch data from the DB and update the cache
SET cache_key data PX 60000
DEL lock_key
```
This code first tries to read the value from the cache. On a miss, it uses `SET ... NX` as a lock so that only one process refreshes the cache, avoiding many simultaneous DB calls.
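The same flow can be simulated in Python with an in-memory stand-in for Redis. This is a minimal sketch: the `FakeRedis` class and helper names are ours for illustration, key expiry (PX) is not modeled, and real code would use an actual Redis client.

```python
import threading
import time

class FakeRedis:
    """Tiny in-memory stand-in for the three Redis commands used above.
    Real Redis executes SET ... NX atomically; here a mutex provides that."""
    def __init__(self):
        self._data = {}
        self._mutex = threading.Lock()

    def get(self, key):
        with self._mutex:
            return self._data.get(key)

    def set(self, key, value, nx=False):
        # Mirrors SET key value [NX]: with nx=True, set only if the key is absent.
        with self._mutex:
            if nx and key in self._data:
                return False
            self._data[key] = value
            return True

    def delete(self, key):
        with self._mutex:
            self._data.pop(key, None)

db_calls = 0  # counts how many clients actually reach the database

def expensive_db_fetch():
    global db_calls
    db_calls += 1          # only ever run by the single lock holder
    time.sleep(0.01)       # pretend the DB query is slow
    return "fresh-data"

def get_with_lock(r):
    while True:
        value = r.get("cache_key")         # GET cache_key
        if value is not None:
            return value
        if r.set("lock_key", "1", nx=True):  # SET lock_key "1" NX
            try:
                value = r.get("cache_key")   # re-check: another client may have refilled
                if value is None:
                    value = expensive_db_fetch()
                    r.set("cache_key", value)  # SET cache_key data
                return value
            finally:
                r.delete("lock_key")           # DEL lock_key
        time.sleep(0.001)  # lock held elsewhere: wait, then re-check the cache

r = FakeRedis()
results = []
threads = [threading.Thread(target=lambda: results.append(get_with_lock(r)))
           for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("DB calls:", db_calls)  # prints "DB calls: 1"
```

Fifty concurrent clients produce exactly one database fetch; everyone else either reads the cached value directly or waits briefly and then reads it.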
Look for repeated actions that affect time.
- Primary operation: Multiple clients try GET and SET commands concurrently.
- How many times: Once per client request, but only one client sets the lock and updates the cache.
As more clients request data, most do a quick GET. Only one client does the heavier update work.
| Concurrent Clients (n) | Approx. Redis Operations |
|---|---|
| 10 | 10 GETs, 1 SET lock + 1 SET cache + 1 DEL lock |
| 100 | 100 GETs, 1 SET lock + 1 SET cache + 1 DEL lock |
| 1000 | 1000 GETs, 1 SET lock + 1 SET cache + 1 DEL lock |
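The rows above follow a simple closed form, sketched below (the function name `approx_ops` is ours, not from the original analysis):

```python
def approx_ops(n):
    """Approximate Redis operations when n clients hit a cold cache:
    n GETs plus a constant 3 (SET lock, SET cache, DEL lock)."""
    return n + 3

for n in (10, 100, 1000):
    print(n, "clients ->", approx_ops(n), "operations")
```

The constant term is the single refill; only the cheap GETs scale with n.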
Pattern observation: the expensive refill work (lock + DB fetch + cache write) stays constant no matter how many clients ask; only the number of cheap GETs grows with n.
Time Complexity: O(n)
Total time grows linearly because each of the n clients issues one O(1) GET, while the expensive cache update happens only once.
[X] Wrong: "Every request will update the cache, so time grows very fast."
[OK] Correct: The lock ensures only one request updates the cache, so others just read quickly.
Understanding how to control work in Redis when many users ask at once shows you can design systems that stay fast and reliable under pressure.
"What if we removed the lock and let all clients update the cache? How would the time complexity change?"
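As a hint for reasoning about that question, here is a back-of-the-envelope model of DB load (worst case, cold cache; the function names are ours):

```python
def db_fetches_with_lock(n):
    # The NX lock serializes the refill: exactly one of the n clients
    # reaches the database; the rest re-read the cache.
    return 1 if n > 0 else 0

def db_fetches_without_lock(n):
    # Worst case without the lock: every client misses at the same moment
    # and issues its own DB query, so the expensive work is O(n), not O(1).
    return n

print(db_fetches_with_lock(1000), "vs", db_fetches_without_lock(1000))
```

Total Redis-side complexity stays O(n) either way, but without the lock the slow, expensive part also scales with n, which is exactly the stampede.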