
C++ Program to Implement LRU Cache Efficiently

An LRU cache in C++ can be implemented using std::list to track usage order and std::unordered_map for fast access, where get and put operations update the list to keep the most recently used items at the front.
📋

Examples

Input: put(1, 10), put(2, 20), get(1), put(3, 30), get(2)
Output: 10, -1

Input: put(1, 100), put(2, 200), put(3, 300), get(3), put(4, 400), get(1)
Output: 300, -1

Input: get(5)
Output: -1

(All examples assume a cache capacity of 2.)
🧠

How to Think About It

To build an LRU cache, use a list to keep track of the order in which keys are used, with the most recent at the front. Use a map to quickly find keys and their positions in the list. When a key is accessed or added, move it to the front. If the cache is full, remove the least recently used key from the back.
📐

Algorithm

1. Initialize a list to store keys in usage order and a map to store key-value pairs along with iterators into the list.
2. For get(key), check whether the key exists in the map; if not, return -1.
3. If the key exists, move it to the front of the list to mark it as recently used and return its value.
4. For put(key, value), if the key already exists, update its value and move the key to the front.
5. If the key does not exist, add it to the front; if the cache is at capacity, remove the key at the back from both the list and the map.
💻

Code

cpp
#include <iostream>
#include <list>
#include <unordered_map>

class LRUCache {
    int capacity;
    std::list<int> keys;  // usage order: most recently used key at the front
    // key -> {value, iterator into `keys`}
    std::unordered_map<int, std::pair<int, std::list<int>::iterator>> cache;

public:
    LRUCache(int cap) : capacity(cap) {}

    int get(int key) {
        auto it = cache.find(key);
        if (it == cache.end()) return -1;
        // Mark as most recently used: move the key to the front of the list.
        keys.erase(it->second.second);
        keys.push_front(key);
        it->second.second = keys.begin();
        return it->second.first;
    }

    void put(int key, int value) {
        auto it = cache.find(key);
        if (it != cache.end()) {
            // Key exists: drop its old position; it is re-inserted at the front below.
            keys.erase(it->second.second);
        } else if ((int)keys.size() == capacity) {
            // Cache full: evict the least recently used key (back of the list).
            int old_key = keys.back();
            keys.pop_back();
            cache.erase(old_key);
        }
        keys.push_front(key);
        cache[key] = {value, keys.begin()};
    }
};

int main() {
    LRUCache cache(2);
    cache.put(1, 10);
    cache.put(2, 20);
    std::cout << cache.get(1) << std::endl; // 10
    cache.put(3, 30);                       // evicts key 2
    std::cout << cache.get(2) << std::endl; // -1
    return 0;
}
Output
10
-1
🔍

Dry Run

Let's trace the example with operations: put(1,10), put(2,20), get(1), put(3,30), get(2).

1. put(1, 10): cache empty; add key 1 with value 10; keys list: [1]
2. put(2, 20): add key 2 with value 20; keys list: [2, 1]
3. get(1): key 1 found; move to front; keys list: [1, 2]; return 10
4. put(3, 30): cache full; evict least recently used key 2; add key 3; keys list: [3, 1]
5. get(2): key 2 not found; return -1

Operation | Keys List | Cache Contents | Returned Value
put(1,10) | [1] | {1:10} | -
put(2,20) | [2, 1] | {1:10, 2:20} | -
get(1) | [1, 2] | {1:10, 2:20} | 10
put(3,30) | [3, 1] | {1:10, 3:30} | -
get(2) | [3, 1] | {1:10, 3:30} | -1
💡

Why This Works

Step 1: Use list and map

The list keeps keys in order of use, and the map stores key-value pairs with quick access to list positions.

Step 2: Update order on access

When a key is accessed or added, it moves to the front of the list to mark it as most recently used.

Step 3: Evict least used key

If the cache is full, the key at the back of the list (least recently used) is removed from both the list and map.

🔄

Alternative Approaches

Using std::deque and unordered_map
cpp
#include <iostream>
#include <deque>
#include <unordered_map>
#include <algorithm>

class LRUCache {
    int capacity;
    std::deque<int> dq;
    std::unordered_map<int, int> cache;

public:
    LRUCache(int cap) : capacity(cap) {}

    int get(int key) {
        if (cache.find(key) == cache.end()) return -1;
        // Move key to front by removing and pushing
        auto it = std::find(dq.begin(), dq.end(), key);
        if (it != dq.end()) dq.erase(it);
        dq.push_front(key);
        return cache[key];
    }

    void put(int key, int value) {
        if (cache.find(key) != cache.end()) {
            auto it = std::find(dq.begin(), dq.end(), key);
            if (it != dq.end()) dq.erase(it);
        } else if ((int)dq.size() == capacity) {
            int old_key = dq.back();
            dq.pop_back();
            cache.erase(old_key);
        }
        dq.push_front(key);
        cache[key] = value;
    }
};

int main() {
    LRUCache cache(2);
    cache.put(1, 10);
    cache.put(2, 20);
    std::cout << cache.get(1) << std::endl;
    cache.put(3, 30);
    std::cout << cache.get(2) << std::endl;
    return 0;
}
This approach uses a deque, but removing a key requires a linear-time std::find, whereas the list version removes via a stored iterator in O(1).
Using std::map with timestamps
cpp
#include <iostream>
#include <map>
#include <unordered_map>

class LRUCache {
    int capacity, time = 0;
    std::unordered_map<int, std::pair<int, int>> cache; // key -> {value, time}
    std::map<int, int> time_map; // time -> key

public:
    LRUCache(int cap) : capacity(cap) {}

    int get(int key) {
        if (cache.find(key) == cache.end()) return -1;
        int old_time = cache[key].second;
        time_map.erase(old_time);
        cache[key].second = ++time;
        time_map[time] = key;
        return cache[key].first;
    }

    void put(int key, int value) {
        if (cache.find(key) != cache.end()) {
            int old_time = cache[key].second;
            time_map.erase(old_time);
        } else if ((int)cache.size() == capacity) {
            auto it = time_map.begin();
            int old_key = it->second;
            time_map.erase(it);
            cache.erase(old_key);
        }
        cache[key] = {value, ++time};
        time_map[time] = key;
    }
};

int main() {
    LRUCache cache(2);
    cache.put(1, 10);
    cache.put(2, 20);
    std::cout << cache.get(1) << std::endl;
    cache.put(3, 30);
    std::cout << cache.get(2) << std::endl;
    return 0;
}
This uses timestamps to track usage, but maintaining the sorted map adds O(log n) overhead to every operation.

Complexity: O(1) average time for get and put, O(capacity) space

Time Complexity

Using a list and unordered_map allows both get and put operations to run in constant average time: the map locates a key's list node directly, and list insertion and removal via iterators are O(1).

Space Complexity

The cache stores up to capacity key-value pairs and their positions, so space grows linearly with capacity.

Which Approach is Fastest?

The list + unordered_map approach is fastest due to O(1) removals and insertions; alternatives like deque or timestamp maps have slower operations.

Approach | Time | Space | Best For
List + unordered_map | O(1) | O(capacity) | Fastest get/put with order tracking
Deque + unordered_map | O(n) for removal | O(capacity) | Simpler but slower removals
Timestamp + map | O(log n) | O(capacity) | Tracks usage with timestamps, more overhead
💡
Use a list and map together to achieve O(1) time for both get and put operations in an LRU cache.
⚠️
Beginners often forget to update the usage order when accessing or inserting keys, breaking the LRU logic.