Redis query (~10 mins)

Denormalization for speed in Redis - Step-by-Step Execution

Concept Flow - Denormalization for speed
Start with normalized data
Identify frequent joins/queries
Duplicate related data into one record
Store denormalized data in Redis
Query single denormalized record
Faster reads, fewer joins
Update duplicated data carefully
Denormalization copies related data into one place to speed up reads by avoiding joins, especially in Redis, which has no JOIN operation and stores data as key-value pairs.
Execution Sample
Redis
HSET user:1 id 1 name "Alice" age 30
HSET order:101 id 101 user_id 1 total 50
# denormalized record combining user and order fields
HSET user_order:101 id 101 user_id 1 user_name "Alice" total 50
HGETALL user_order:101
Shows storing user and order separately, then a denormalized combined record for faster access.
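The same flow can be sketched with an in-memory key-value store standing in for Redis (a plain Python dict here, used purely for illustration; the key names mirror the execution sample above):

```python
# Simulated key-value store standing in for Redis hashes.
store = {}

# Steps 1-2: normalized records, one hash per entity.
store["user:1"] = {"id": 1, "name": "Alice", "age": 30}
store["order:101"] = {"id": 101, "user_id": 1, "total": 50}

# Normalized read: an application-side "join" needs two lookups.
order = store["order:101"]
user = store[f"user:{order['user_id']}"]
joined = {**order, "user_name": user["name"]}

# Step 3: the denormalized record copies the user name into the order.
store["user_order:101"] = {
    "id": 101, "user_id": 1, "user_name": "Alice", "total": 50,
}

# Step 4: the denormalized read is a single lookup, no join logic.
fast = store["user_order:101"]
print(fast["user_name"])  # -> Alice
```

Note that the normalized path needed two lookups plus merge logic, while the denormalized path is one lookup; that is the whole trade.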
Execution Table
Step | Action | Data Stored | Reason
1 | Store user data | user:1 -> {id:1, name:"Alice", age:30} | Normalized user info
2 | Store order data | order:101 -> {id:101, user_id:1, total:50} | Normalized order info
3 | Create denormalized record | user_order:101 -> {id:101, user_id:1, user_name:"Alice", total:50} | Combine user and order for fast reads
4 | Query user_order:101 | {id:101, user_id:1, user_name:"Alice", total:50} | Single read, no join needed
5 | Update user name | Update user:1 and user_order:101 together | Duplicated data must be updated everywhere to stay consistent
6 | Query user_order:101 again | {id:101, user_id:1, user_name:"Alice", total:50} | Single fast read, still consistent after the update
7 | End | - | Denormalization speeds up reads but requires careful updates
💡 Denormalization stops when data is combined for fast reads, but updates must keep duplicates consistent
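Step 5 is where denormalization bites: every copy of the duplicated field must be updated together. A minimal sketch of that propagation, again using a plain dict in place of Redis (the helper name `update_user_name` and the new value "Alicia" are assumptions for illustration):

```python
# Store with one source record and two denormalized copies of the name.
store = {
    "user:1": {"id": 1, "name": "Alice", "age": 30},
    "user_order:101": {"id": 101, "user_id": 1, "user_name": "Alice", "total": 50},
    "user_order:102": {"id": 102, "user_id": 1, "user_name": "Alice", "total": 20},
}

def update_user_name(store, user_id, new_name):
    """Update the source record AND every denormalized copy of the name."""
    store[f"user:{user_id}"]["name"] = new_name
    for key, record in store.items():
        if key.startswith("user_order:") and record["user_id"] == user_id:
            record["user_name"] = new_name

update_user_name(store, 1, "Alicia")
# All copies now agree; skipping any one of them causes stale reads.
```

In real Redis this scan would be replaced by an index of denormalized keys per user (scanning the whole keyspace does not scale), but the invariant is the same: one logical write fans out to N physical writes.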
Variable Tracker
Key | Initial | After Step 1 | After Step 2 | After Step 3 | After Step 5 | Final
user:1 | none | {id:1, name:"Alice", age:30} | {id:1, name:"Alice", age:30} | {id:1, name:"Alice", age:30} | {id:1, name:"Alice", age:30} | {id:1, name:"Alice", age:30}
order:101 | none | none | {id:101, user_id:1, total:50} | {id:101, user_id:1, total:50} | {id:101, user_id:1, total:50} | {id:101, user_id:1, total:50}
user_order:101 | none | none | none | {id:101, user_id:1, user_name:"Alice", total:50} | {id:101, user_id:1, user_name:"Alice", total:50} | {id:101, user_id:1, user_name:"Alice", total:50}
Key Moments - 3 Insights
Why do we store the user name twice, in both user:1 and user_order:101?
Because denormalization duplicates data to avoid joins at read time: user_order:101 carries user_name so the whole record can be served in a single lookup (see execution table, step 3).
What happens if we update user:1 but forget to update user_order:101?
The denormalized data becomes inconsistent, causing stale reads from user_order:101 (see execution table, step 5).
Why is querying user_order:101 faster than joining user:1 and order:101?
Because all the needed data lives in one record, Redis fetches a single key instead of fetching multiple keys and combining them (see execution table, step 4).
Visual Quiz - 3 Questions
Test your understanding
Looking at the execution table at step 3, what data is stored in user_order:101?
A. {id:101, user_id:1, total:50}
B. {id:101, user_id:1, user_name:"Alice", total:50}
C. {id:1, name:"Alice", age:30}
D. None
💡 Hint
Check the 'Data Stored' column of the execution table row for step 3
At which step does the denormalized record get updated after a user name change?
A. Step 5
B. Step 3
C. Step 2
D. Step 6
💡 Hint
Look for the step that updates the duplicated data in the execution table
If we remove the denormalized record user_order:101, what happens to read speed?
A. It becomes faster
B. It stays the same
C. It becomes slower due to multiple reads
D. It causes errors
💡 Hint
Denormalization avoids multiple reads by combining data; see the concept flow and execution table
Concept Snapshot
Denormalization stores duplicated related data together to speed up reads.
In Redis, this means combining user and order info into one key.
This avoids multiple lookups and joins.
Updates must keep duplicated data consistent.
Denormalization trades write complexity for read speed.
Full Transcript
Denormalization for speed means copying related data into one place to avoid multiple lookups. In Redis, we store user data and order data separately, then create a combined denormalized record with user and order info together. This lets us read all needed data with one key fetch, making reads faster. However, when user data changes, we must update the denormalized record too to keep data consistent. The execution steps show storing user and order, creating the denormalized record, reading it fast, updating duplicates, and reading again. This method improves read speed but requires careful updates.