C++ Program to Implement LRU Cache Efficiently
This approach uses a std::list to track usage order and a std::unordered_map for fast access: every get and put updates the list so that the most recently used key stays at the front and the least recently used key drifts to the back.
How to Think About It
An LRU cache needs two things at once: O(1) lookup by key and O(1) maintenance of recency order. A hash map alone gives fast lookup but no order; a list alone gives order but slow lookup. Combining them, a doubly linked list of keys plus a map from each key to its value and its list iterator, gives both.
Algorithm
- get(key): if the key is absent, return -1. Otherwise erase its node via the stored iterator, push the key to the front, update the stored iterator, and return the value.
- put(key, value): if the key already exists, erase its old list node. Otherwise, if the cache is full, evict the key at the back of the list from both the list and the map. Then push the key to the front and store the value and new iterator in the map.
Code
```cpp
#include <iostream>
#include <list>
#include <unordered_map>

class LRUCache {
    int capacity;
    std::list<int> keys;  // usage order: front = most recently used
    std::unordered_map<int, std::pair<int, std::list<int>::iterator>> cache;

public:
    LRUCache(int cap) : capacity(cap) {}

    int get(int key) {
        if (cache.find(key) == cache.end())
            return -1;
        // Move the accessed key to the front of the usage list
        keys.erase(cache[key].second);
        keys.push_front(key);
        cache[key].second = keys.begin();
        return cache[key].first;
    }

    void put(int key, int value) {
        if (cache.find(key) != cache.end()) {
            // Key already present: drop its old position in the list
            keys.erase(cache[key].second);
        } else if ((int)keys.size() == capacity) {
            // Cache full: evict the least recently used key (back of list)
            int old_key = keys.back();
            keys.pop_back();
            cache.erase(old_key);
        }
        keys.push_front(key);
        cache[key] = {value, keys.begin()};
    }
};

int main() {
    LRUCache cache(2);
    cache.put(1, 10);
    cache.put(2, 20);
    std::cout << cache.get(1) << std::endl;  // 10
    cache.put(3, 30);                        // evicts key 2
    std::cout << cache.get(2) << std::endl;  // -1
    return 0;
}
```
Dry Run
Let's trace the example with operations: put(1,10), put(2,20), get(1), put(3,30), get(2).
put(1, 10)
Cache empty; add key 1 with value 10; keys list: [1]
put(2, 20)
Add key 2 with value 20; keys list: [2, 1]
get(1)
Key 1 found; move to front; keys list: [1, 2]; return 10
put(3, 30)
Cache full; remove least recently used key 2; add key 3; keys list: [3, 1]
get(2)
Key 2 not found; return -1
| Operation | Keys List | Cache Contents | Returned Value |
|---|---|---|---|
| put(1,10) | [1] | {1:10} | - |
| put(2,20) | [2,1] | {1:10, 2:20} | - |
| get(1) | [1,2] | {1:10, 2:20} | 10 |
| put(3,30) | [3,1] | {1:10, 3:30} | - |
| get(2) | [3,1] | {1:10, 3:30} | -1 |
Why This Works
Step 1: Use list and map
The list keeps keys in order of use, and the map stores key-value pairs with quick access to list positions.
Step 2: Update order on access
When a key is accessed or added, it moves to the front of the list to mark it as most recently used.
Step 3: Evict least used key
If the cache is full, the key at the back of the list (least recently used) is removed from both the list and map.
Alternative Approaches
Approach 1: Deque + unordered_map. Simpler bookkeeping, but moving a key to the front requires a linear std::find, making each operation O(n).

```cpp
#include <algorithm>
#include <deque>
#include <iostream>
#include <unordered_map>

class LRUCache {
    int capacity;
    std::deque<int> dq;  // usage order: front = most recently used
    std::unordered_map<int, int> cache;

public:
    LRUCache(int cap) : capacity(cap) {}

    int get(int key) {
        if (cache.find(key) == cache.end())
            return -1;
        // Move key to front: std::find makes this O(n)
        auto it = std::find(dq.begin(), dq.end(), key);
        if (it != dq.end())
            dq.erase(it);
        dq.push_front(key);
        return cache[key];
    }

    void put(int key, int value) {
        if (cache.find(key) != cache.end()) {
            auto it = std::find(dq.begin(), dq.end(), key);
            if (it != dq.end())
                dq.erase(it);
        } else if ((int)dq.size() == capacity) {
            int old_key = dq.back();  // least recently used
            dq.pop_back();
            cache.erase(old_key);
        }
        dq.push_front(key);
        cache[key] = value;
    }
};

int main() {
    LRUCache cache(2);
    cache.put(1, 10);
    cache.put(2, 20);
    std::cout << cache.get(1) << std::endl;  // 10
    cache.put(3, 30);
    std::cout << cache.get(2) << std::endl;  // -1
    return 0;
}
```
Approach 2: Timestamps + ordered map. Each access stamps the key with an increasing counter; an ordered std::map from timestamp to key makes the oldest entry easy to find, at O(log n) per operation.

```cpp
#include <iostream>
#include <map>
#include <unordered_map>

class LRUCache {
    int capacity, time = 0;
    std::unordered_map<int, std::pair<int, int>> cache;  // key -> {value, last-used time}
    std::map<int, int> time_map;                         // time -> key (ordered, oldest first)

public:
    LRUCache(int cap) : capacity(cap) {}

    int get(int key) {
        if (cache.find(key) == cache.end())
            return -1;
        // Refresh the key's timestamp (ordered-map operations are O(log n))
        time_map.erase(cache[key].second);
        cache[key].second = ++time;
        time_map[time] = key;
        return cache[key].first;
    }

    void put(int key, int value) {
        if (cache.find(key) != cache.end()) {
            time_map.erase(cache[key].second);
        } else if ((int)cache.size() == capacity) {
            // Evict the entry with the smallest (oldest) timestamp
            auto it = time_map.begin();
            cache.erase(it->second);
            time_map.erase(it);
        }
        cache[key] = {value, ++time};
        time_map[time] = key;
    }
};

int main() {
    LRUCache cache(2);
    cache.put(1, 10);
    cache.put(2, 20);
    std::cout << cache.get(1) << std::endl;  // 10
    cache.put(3, 30);
    std::cout << cache.get(2) << std::endl;  // -1
    return 0;
}
```
Complexity of the alternatives: the deque version costs O(n) per operation because of the linear std::find, and the timestamp version costs O(log n) per operation because of the ordered map; both use O(capacity) space.
Time Complexity
Using a list together with an unordered_map lets both get and put run in O(1) on average: the map lookup is constant time, and because the map stores each key's list iterator, the node can be erased and re-inserted at the front without scanning the list.
Space Complexity
The cache stores up to capacity key-value pairs and their positions, so space grows linearly with capacity.
Which Approach is Fastest?
The list + unordered_map approach is fastest: erasing a node through its stored iterator and pushing to the front are both O(1). The deque version pays O(n) per operation for std::find, and the timestamp version pays O(log n) for the ordered map.
| Approach | Time | Space | Best For |
|---|---|---|---|
| List + Unordered_Map | O(1) | O(capacity) | Fastest get/put with order tracking |
| Deque + Unordered_Map | O(n) for removal | O(capacity) | Simpler but slower removals |
| Timestamp + Map | O(log n) | O(capacity) | Tracks usage with timestamps, more overhead |