Cache-aside (lazy loading) deep dive in Redis - Time & Space Complexity
We want to understand how the time to get data changes when using cache-aside in Redis.
Specifically, how the number of operations grows when fetching data that may or may not be cached.
Analyze the time complexity of the following Redis cache-aside pattern.
```
# Try to get data from cache
GET user:123

# If cache miss, fetch from database, then store in cache
SET user:123 <data>
```
This code tries to get user data from Redis cache first. If not found, it fetches from the database and updates the cache.
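The same flow can be sketched in Python. This is an illustrative sketch only: a plain dict stands in for Redis (a real app would use a client such as redis-py), and `fetch_user_from_db` is a hypothetical stand-in for the real database call.

```python
import json

cache = {}  # stands in for Redis; in practice this would be GET/SET over a Redis client

def fetch_user_from_db(user_id):
    # Hypothetical database fetch -- a real app would query SQL/NoSQL here.
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    key = f"user:{user_id}"
    value = cache.get(key)               # GET user:<id>
    if value is not None:                # cache hit: one operation, done
        return json.loads(value)
    data = fetch_user_from_db(user_id)   # cache miss: go to the database
    cache[key] = json.dumps(data)        # SET user:<id> <data>
    return data
```

Note that the cache is only ever written on the read path, after a miss; that is what makes this "lazy" loading.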
Look for repeated actions that affect time.
- Primary operation: Single GET command to Redis, possibly followed by a database fetch and SET.
- How many times: Once per data request; no loops or recursion inside this snippet.
Each request involves one or two main operations: a cache GET and possibly a database fetch plus cache SET.
| Number of Requests (n) | Approx. Operations |
|---|---|
| 10 | About 10 GETs, some SETs if misses |
| 100 | About 100 GETs, some SETs if misses |
| 1000 | About 1000 GETs, some SETs if misses |
Pattern observation: The number of operations grows linearly with the number of requests. Each individual request does constant work (one GET, plus at most one database fetch and one SET), so n requests take O(n) in total.
Time Complexity: O(n)
This means total time grows in direct proportion to the number of data requests, even though each single lookup is O(1).
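The linear growth can be checked with a small counting sketch (again with a dict standing in for Redis) that tallies cache GETs and SETs across n requests:

```python
counts = {"get": 0, "set": 0}
cache = {}

def get_user(user_id):
    key = f"user:{user_id}"
    counts["get"] += 1          # every request pays one cache GET
    if key in cache:
        return cache[key]       # hit: done after a single operation
    counts["set"] += 1          # only misses pay a DB fetch + SET
    cache[key] = {"id": user_id}
    return cache[key]

for n in (10, 100, 1000):
    counts["get"] = counts["set"] = 0
    cache.clear()
    for i in range(n):          # all-miss worst case: n GETs and n SETs
        get_user(i)
    print(n, counts)            # operation counts grow linearly with n
```

In the all-miss worst case the counts match the table above exactly; with repeated keys the SET count drops but the GET count stays at one per request.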
[X] Wrong: "Cache-aside always makes data fetching faster regardless of input size."
[OK] Correct: Cache-aside only helps when the hit rate is high. Every miss still pays the full database fetch plus an extra cache round trip, so if many requests miss, total time degrades as requests grow.
Understanding how cache-aside scales helps you explain real-world data fetching strategies clearly and confidently.
"What if we added a batch GET for multiple keys instead of single GETs? How would the time complexity change?"
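One possible answer, sketched below with a dict standing in for Redis: a batch `MGET` returns all requested values in a single round trip, so the total operation count is still O(n), but the number of network round trips drops from n to roughly n/k for batches of k keys. The `mget` helper and the placeholder database values here are illustrative assumptions, not a real client API.

```python
cache = {"user:1": "alice", "user:3": "carol"}

def mget(keys):
    # Stands in for Redis MGET: one round trip returns all values,
    # with None for keys that are not cached.
    return [cache.get(k) for k in keys]

def get_users(user_ids):
    keys = [f"user:{i}" for i in user_ids]
    values = mget(keys)                  # 1 round trip instead of len(keys)
    result = {}
    for uid, key, val in zip(user_ids, keys, values):
        if val is None:                  # miss: hypothetical DB fetch
            val = f"user-{uid}"          # placeholder for a real DB row
            cache[key] = val             # backfill the cache, as before
        result[uid] = val
    return result
```

Asymptotically nothing changes, but in practice the round-trip latency usually dominates, so batching can be a large constant-factor win.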