Denormalization for speed in Redis - Time & Space Complexity
Denormalization means duplicating or combining related data so that a read needs fewer lookups. In Redis, this usually means merging the fields of several hashes into one, so a single command can fetch everything.
We want to see how the time to fetch data changes when we denormalize.
Analyze the time complexity of the following Redis commands for fetching user info.
```
# Normalized approach
HGETALL user:1000
HGETALL profile:1000

# Denormalized approach
HGETALL user_profile:1000
```
This code shows fetching user data from two hashes separately versus one combined hash.
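A quick way to see the difference is to model the two layouts in plain Python. This is a toy sketch with dicts standing in for Redis hashes; the key names and fields are illustrative, not from a real server:

```python
# Toy in-memory model: each dict stands in for a Redis hash, and each call
# to hgetall() stands in for one HGETALL command (one network round trip).

user = {"name": "Ada", "email": "ada@example.com"}       # user:1000
profile = {"bio": "engineer", "avatar": "ada.png"}       # profile:1000
user_profile = {**user, **profile}                       # user_profile:1000

def hgetall(h, stats):
    """Model HGETALL: one command per call, every field copied out."""
    stats["commands"] += 1
    stats["fields_read"] += len(h)
    return dict(h)

normalized = {"commands": 0, "fields_read": 0}
combined = {**hgetall(user, normalized), **hgetall(profile, normalized)}

denormalized = {"commands": 0, "fields_read": 0}
single = hgetall(user_profile, denormalized)

print(normalized)    # {'commands': 2, 'fields_read': 4}
print(denormalized)  # {'commands': 1, 'fields_read': 4}
```

Note that both layouts read the same number of fields in the end; what the denormalized layout saves is the second command and its round trip.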
To analyze the cost, count what repeats when fetching:
- Primary operation: reading hash fields with HGETALL, which is O(n) in the number of fields in the hash.
- How many times: the normalized approach issues two commands (two network round trips); the denormalized approach issues one.
As the number of fields grows, the time to read grows linearly in both layouts; what denormalization saves is the second command and its network round trip.
| Total Fields | Normalized (2 × HGETALL) | Denormalized (1 × HGETALL) |
|---|---|---|
| 10 | ~10 field reads, 2 round trips | ~10 field reads, 1 round trip |
| 100 | ~100 field reads, 2 round trips | ~100 field reads, 1 round trip |
| 1000 | ~1000 field reads, 2 round trips | ~1000 field reads, 1 round trip |
Pattern observation: both approaches read each field once, so the work grows linearly with the number of fields; the normalized approach additionally pays a fixed extra command and round trip on every fetch, which dominates when the hashes are small.
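The pattern can be sketched with a simple cost model. The per-field and round-trip times below are assumed placeholder constants, not measurements:

```python
# Assumed cost model: total_time = fields_read * T_FIELD + round_trips * T_RTT.
# T_FIELD and T_RTT are illustrative constants, not benchmarked values.

T_FIELD = 1e-6   # assumed cost to read one hash field (seconds)
T_RTT = 5e-4     # assumed network round-trip cost (seconds)

def fetch_cost(total_fields, round_trips):
    return total_fields * T_FIELD + round_trips * T_RTT

for n in (10, 100, 1000):
    norm = fetch_cost(n, round_trips=2)
    denorm = fetch_cost(n, round_trips=1)
    print(f"{n:>4} fields: normalized {norm*1e3:.3f} ms, denormalized {denorm*1e3:.3f} ms")
```

Under this model both costs grow linearly with n, and the gap between them is the fixed cost of the extra round trip, so the relative advantage of denormalization is largest for small hashes.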
Time Complexity: O(n)
This means the time to read data grows directly with the number of fields you fetch.
[X] Wrong: "Denormalization always makes data fetching instant regardless of size."
[OK] Correct: Even with denormalization, reading more fields takes more time, because HGETALL is O(n) in the number of fields it returns.
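The correct intuition can be checked with the same kind of toy model, this time mimicking an HMGET-style fetch of selected fields from the combined hash (the field names `f0`, `f1`, … are made up for the sketch):

```python
# Toy model of fetching k fields from the denormalized hash: the work grows
# with k, so a bigger read is still a slower read even with one hash.

user_profile = {f"f{i}": i for i in range(1000)}  # stand-in for user_profile:1000

def hmget(h, fields, stats):
    """Model HMGET: one lookup per requested field."""
    stats["lookups"] += len(fields)
    return [h.get(f) for f in fields]

for k in (10, 100, 1000):
    stats = {"lookups": 0}
    hmget(user_profile, [f"f{i}" for i in range(k)], stats)
    print(f"fetching {k:>4} fields -> {stats['lookups']} lookups")
```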
Understanding how denormalization affects read speed helps you design fast Redis queries and shows you think about real-world data access patterns.
"What if we split the denormalized hash into three smaller hashes instead of one big one? How would the time complexity change?"