Why caching patterns matter in Redis - Performance Analysis
When using caching with Redis, the way we store and retrieve data affects how fast our app runs.
We want to know how the time to get data changes as we use different caching patterns.
Analyze the time complexity of this caching pattern in Redis.
```
# Cache user data with a key
SET user:123 "{\"name\": \"Alice\", \"age\": 30}"

# Retrieve cached user data
GET user:123

# Cache a list of user IDs
LPUSH users 123
LPUSH users 456

# Retrieve all user IDs
LRANGE users 0 -1
```
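The same pattern can be sketched in plain Python, using a dict and a list as in-memory stand-ins for the Redis keyspace (a real client such as redis-py would send the commands above instead; the variable names here are illustrative, not part of any Redis API):

```python
import json

# In-memory stand-ins: a dict models Redis string keys (SET/GET),
# a list models the Redis list used for LPUSH/LRANGE.
store = {}
user_ids = []

# SET user:123 "{...}"  -- O(1): a single keyed write
store["user:123"] = json.dumps({"name": "Alice", "age": 30})

# GET user:123          -- O(1): a single keyed read
user = json.loads(store["user:123"])

# LPUSH users 123 / 456 -- O(1) per push in Redis (prepends to the head;
# note Python's list.insert(0, ...) is itself O(n), used here only to
# mirror the ordering Redis produces)
user_ids.insert(0, 123)
user_ids.insert(0, 456)

# LRANGE users 0 -1     -- O(n): touches every element in the list
all_ids = user_ids[:]

print(user)     # {'name': 'Alice', 'age': 30}
print(all_ids)  # [456, 123]
```

Note that `LPUSH` prepends, so the most recently pushed ID comes back first from `LRANGE`.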
This code stores individual user data and a list of user IDs, then retrieves them.
Look for repeated actions that take time.
- Primary operation: retrieving all user IDs with `LRANGE`, which reads each item in the list.
- How many times: once per user ID in the list, so the work grows with the number of users.
Getting one user by key is quick and stays fast no matter how many users there are.
But getting all user IDs from the list takes longer as the list grows.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 users | 10 operations to get all IDs |
| 100 users | 100 operations to get all IDs |
| 1000 users | 1000 operations to get all IDs |
Pattern observation: the cost of retrieving the whole list grows linearly with the number of users.
Time Complexity: O(n)
This means the time to get all user IDs grows directly with how many users are stored.
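The table above can be reproduced with a short counting sketch (a pure-Python stand-in, not a real Redis call): a keyed lookup always costs one read, while fetching the whole list costs one read per element.

```python
def reads_for_get_one() -> int:
    # GET user:<id> is a single keyed lookup: one read, regardless of
    # how many users are cached.
    return 1

def reads_for_get_all(user_ids: list) -> int:
    # LRANGE users 0 -1 must touch every element: one read per user ID.
    return sum(1 for _ in user_ids)

for n in (10, 100, 1000):
    ids = list(range(n))
    print(n, reads_for_get_one(), reads_for_get_all(ids))
# prints: 10 1 10 / 100 1 100 / 1000 1 1000
```

The single-key read stays at 1 while the full-list read tracks n exactly, which is the O(1) vs. O(n) contrast in the table.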
[X] Wrong: "Getting all items from a Redis list is always fast and constant time."
[OK] Correct: Retrieving all items requires reading each one, so time grows with list size.
Understanding how caching patterns affect speed helps you design fast apps and answer questions about efficiency clearly.
What if we changed the list of user IDs to a Redis set? How would the time complexity of retrieving all IDs change?