Cache-aside pattern in Redis - Time & Space Complexity
When using the cache-aside pattern, we want to know how the time to serve reads changes as the number of requests grows.
We ask: how does the number of steps grow when we check the cache and then, on a miss, the database?
Analyze the time complexity of the following Redis cache-aside code snippet.
// Try to get the data from the cache
data = GET user:123
// On a cache miss, fall back to the database and update the cache
if data is null {
    data = DB.GET("user:123")
    SET user:123 data
}
// Return the data (from cache or database)
return data
This code tries to get user data from Redis cache first. If missing, it fetches from the database and updates the cache.
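The pseudocode above can be sketched in Python. This is a minimal illustration, not production code: plain dicts stand in for the Redis client and the database driver, and the key `user:123` is just the example key from the snippet.

```python
# Minimal cache-aside sketch. Dicts stand in for Redis and the database;
# in a real system these would be a Redis client and a DB query.
cache = {}
db = {"user:123": {"name": "Ada"}}

def get_user(key):
    # 1. Try the cache first.
    data = cache.get(key)
    if data is None:
        # 2. Cache miss: fall back to the database...
        data = db.get(key)
        # 3. ...and populate the cache for subsequent reads.
        if data is not None:
            cache[key] = data
    return data

first = get_user("user:123")   # miss: hits the DB, then fills the cache
second = get_user("user:123")  # hit: served straight from the cache
```

Note that both the hit path and the miss path perform a bounded number of operations: one cache lookup, plus at most one database fetch and one cache write.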
Identify any loops, recursion, or repeated traversals in the code.
- Primary operation: Single key lookup in cache and possibly one database fetch.
- How many times: Exactly once per data request.
Each request checks the cache once and may check the database once if needed.
| Requests (n) | Approx. Operations |
|---|---|
| 10 | 10 cache lookups, up to 10 database fetches |
| 100 | 100 cache lookups, up to 100 database fetches |
| 1000 | 1000 cache lookups, up to 1000 database fetches |
Pattern observation: The number of operations grows directly with the number of requests.
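The table's pattern can be checked with a small simulation. This is a sketch under the same assumptions as before (a dict stands in for Redis); it counts one cache lookup per request and confirms the linear growth.

```python
def count_operations(n):
    # Serve n requests through a cache-aside loop and count cache lookups.
    # Each request performs exactly one lookup, plus at most one DB fetch.
    cache = {}
    lookups = 0
    for i in range(n):
        key = f"user:{i}"
        lookups += 1              # the cache check (Redis GET)
        if key not in cache:
            cache[key] = i        # stand-in for the DB fetch + SET
    return lookups

print(count_operations(10), count_operations(100), count_operations(1000))
```

The counts come out as 10, 100, and 1000, matching the table row for row.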
Time Complexity: O(n)
In other words, the total time grows linearly with the number of requests, while each individual request still costs O(1).
[X] Wrong: "Cache-aside pattern reduces time complexity to constant time regardless of requests."
[OK] Correct: Each request still requires at least one cache check, so the cost per request is constant, but the total work across n requests grows as O(n).
Understanding how cache-aside scales helps you explain real-world data fetching strategies clearly and confidently.
"What if we batch multiple keys in one cache request? How would the time complexity change?"