Data Structures Theory · Knowledge · ~10 mins

LRU cache design with hash map and doubly linked list in Data Structures Theory - Step-by-Step Execution

Concept Flow - LRU cache design with hash map and doubly linked list
Start: cache empty
Insert new key: is the cache full?
  No: add a node at the head of the DLL and add the key-node pair to the hashmap
  Yes: remove the tail node, add the new node at the head, and update the hashmap with the new node
Access a key? Yes: move the node to the head and return its value
End
The cache stores keys in a doubly linked list (DLL) to track usage order and in a hashmap for fast access. On insert, if the cache is full, the tail node (least recently used) is removed. On access, the node is moved to the head (most recently used).
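The two structures described above can be sketched concretely. This is a minimal, hand-built illustration (the `Node` class and `index` dict are assumed names, not from the source): each key maps straight to its DLL node, so unlinking takes O(1) with no traversal.

```python
class Node:
    """One cache entry; doubly linked so unlinking is O(1)."""
    def __init__(self, key, value):
        self.key, self.value = key, value
        self.prev = None  # toward the head (more recently used)
        self.next = None  # toward the tail (less recently used)

# Hand-build the order from step 2 of the example: head -> 2:B -> 1:A -> tail
a, b = Node(1, 'A'), Node(2, 'B')
b.next, a.prev = a, b

# The hashmap points each key directly at its node, skipping traversal.
index = {1: a, 2: b}

# Unlink node 1 in O(1) using its own prev/next pointers (no search needed).
node = index[1]
if node.prev:
    node.prev.next = node.next
if node.next:
    node.next.prev = node.prev
```

The same O(1) unlink is what lets a real LRU cache move any accessed node to the head without scanning the list.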
Execution Sample
cache = LRUCache(2)
cache.put(1, 'A')
cache.put(2, 'B')
cache.get(1)
cache.put(3, 'C')
cache.get(2)
This code creates an LRU cache of size 2, adds two items, accesses one to update usage, adds a third causing eviction, then tries to access an evicted key.
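The `LRUCache` class used in the sample is not shown. A minimal sketch that reproduces the trace in the Analysis Table could look like the following; the sentinel head/tail nodes are an implementation convenience assumed here, not something stated in the source.

```python
class Node:
    def __init__(self, key, value):
        self.key, self.value = key, value
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}  # key -> Node, for O(1) lookup
        # Sentinel head/tail nodes simplify inserts and removals at the ends.
        self.head, self.tail = Node(None, None), Node(None, None)
        self.head.next, self.tail.prev = self.tail, self.head

    def _remove(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _add_front(self, node):
        node.next, node.prev = self.head.next, self.head
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key not in self.map:
            return -1
        node = self.map[key]
        self._remove(node)
        self._add_front(node)  # mark as most recently used
        return node.value

    def put(self, key, value):
        if key in self.map:
            self._remove(self.map[key])
        elif len(self.map) == self.capacity:
            lru = self.tail.prev  # least recently used sits at the tail
            self._remove(lru)
            del self.map[lru.key]
        node = Node(key, value)
        self.map[key] = node
        self._add_front(node)
```

Running the five operations from the sample against this sketch yields exactly the states in the Analysis Table: `get(1)` returns `'A'`, `put(3, 'C')` evicts key 2, and `get(2)` returns -1.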
Analysis Table
Step | Operation | Hashmap State | DLL State (Head->Tail) | Eviction | Output
1 | put(1, 'A') | {1:'A'} | [1:A] | No | None
2 | put(2, 'B') | {1:'A', 2:'B'} | [2:B <-> 1:A] | No | None
3 | get(1) | {1:'A', 2:'B'} | [1:A <-> 2:B] | No | 'A'
4 | put(3, 'C') | {1:'A', 3:'C'} | [3:C <-> 1:A] | Evict key 2 | None
5 | get(2) | {1:'A', 3:'C'} | [3:C <-> 1:A] | No | -1 (not found)
💡 Step 5: get(2) returns -1 because key 2 was evicted in step 4.
State Tracker
Variable | Start | After Step 1 | After Step 2 | After Step 3 | After Step 4 | After Step 5
Hashmap | {} | {1:'A'} | {1:'A', 2:'B'} | {1:'A', 2:'B'} | {1:'A', 3:'C'} | {1:'A', 3:'C'}
DLL Head | null | 1:A | 2:B | 1:A | 3:C | 3:C
DLL Tail | null | 1:A | 1:A | 2:B | 1:A | 1:A
Cache Size | 0 | 1 | 2 | 2 | 2 | 2
Key Insights - 3 Insights
Why do we move a node to the head of the DLL on access?
Moving the accessed node to the head marks it as most recently used, so it won't be evicted soon. See step 3 in the Analysis Table, where get(1) moves key 1 to the head.
Why do we remove the tail node when the cache is full?
The tail node is the least recently used item. Removing it frees space for new entries. Step 4 shows eviction of key 2 at the tail when adding key 3.
How does the hashmap help in this design?
The hashmap gives O(1) access to nodes in the DLL by key, avoiding list traversal. This is why get operations are fast, as shown in steps 3 and 5.
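In Python specifically, the hashmap-plus-ordering combination is also available off the shelf: `collections.OrderedDict` keeps insertion order and supports O(1) reordering. This is an alternative sketch, not the design described above; the class name `LRU` is hypothetical.

```python
from collections import OrderedDict

class LRU:
    """LRU cache backed by an OrderedDict instead of a hand-rolled DLL."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # oldest entry first, newest last

    def get(self, key):
        if key not in self.data:
            return -1
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        elif len(self.data) == self.capacity:
            self.data.popitem(last=False)  # evict least recently used
        self.data[key] = value
```

`move_to_end` and `popitem(last=False)` play the same roles as "move node to head" and "remove tail node" in the hand-built design.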
Visual Quiz - 3 Questions
Test your understanding
Look at step 3 in the Analysis Table. What is the DLL state after get(1)?
A) [1:A <-> 2:B]
B) [2:B <-> 1:A]
C) [1:A]
D) [2:B]
💡 Hint
Check the DLL State column at step 3 in the Analysis Table.
At which step does the cache evict a key due to capacity?
A) Step 2
B) Step 3
C) Step 4
D) Step 5
💡 Hint
Look at the Eviction column in the Analysis Table.
If we increase cache size to 3, what would happen at step 4?
A) Eviction of key 2 still happens
B) No eviction, key 3 added at head
C) Cache becomes empty
D) Key 1 is evicted
💡 Hint
Consider the cache capacity and the eviction logic in the Concept Flow.
Concept Snapshot
LRU Cache uses a hashmap + doubly linked list.
Hashmap: O(1) key-node lookup.
DLL: Maintains usage order, head = most recent.
On get: move node to head.
On put: add at head, evict tail if full.
Eviction removes least recently used item.
Full Transcript
An LRU cache stores data so that the most recently used items stay available, and the least recently used items get removed when full. It uses a hashmap for quick key lookup and a doubly linked list to track usage order. When you add a new item and the cache is full, it removes the tail node (the least recently used). When you access an item, it moves that item to the head of the list to mark it as recently used. This design ensures fast access and efficient eviction.