Python Program to Implement LRU Cache
Use the @functools.lru_cache(maxsize=3) decorator on a function to cache its results, keeping at most 3 items.
Examples
How to Think About It
Algorithm
Code
```python
from functools import lru_cache

@lru_cache(maxsize=3)
def fib(n):
    if n < 2:
        return n
    return fib(n-1) + fib(n-2)

print(fib(3))   # Output: 2
print(fib(5))   # Output: 5
print(fib(10))  # Output: 55
```
Dry Run
Let's trace fib(3) through the code with an LRU cache of max size 3.

1. Call fib(3): the cache is empty, so compute fib(3) = fib(2) + fib(1).
2. Compute fib(2): cache miss, so compute fib(2) = fib(1) + fib(0).
3. Compute fib(1) and fib(0): base cases, fib(1) = 1 and fib(0) = 0; store both in the cache.
4. Return fib(2): fib(2) = 1 + 0 = 1; store it in the cache.
5. Return fib(3): the fib(1) lookup is now a cache hit, so fib(3) = fib(2) + fib(1) = 1 + 1 = 2; store it in the cache.
| Call | Cache Contents |
|---|---|
| fib(1) | {fib(1):1} |
| fib(0) | {fib(1):1, fib(0):0} |
| fib(2) | {fib(1):1, fib(0):0, fib(2):1} |
| fib(3) | {fib(2):1, fib(1):1, fib(3):2} (storing fib(3) exceeds maxsize=3, so the least recently used entry, fib(0), is evicted; fib(1) was just accessed, so it stays) |
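The trace above can be verified with the cache_info() method that lru_cache attaches to the decorated function, which reports hits, misses, and the current cache size:

```python
from functools import lru_cache

@lru_cache(maxsize=3)
def fib(n):
    if n < 2:
        return n
    return fib(n-1) + fib(n-2)

fib(3)
info = fib.cache_info()
print(info)  # CacheInfo(hits=1, misses=4, maxsize=3, currsize=3)
```

There are 4 misses (fib(3), fib(2), fib(1), fib(0)) and 1 hit (fib(1) reused inside fib(3)); currsize is 3 because the fourth stored entry forced one eviction.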
Why This Works
Step 1: Using @lru_cache decorator
The @lru_cache decorator automatically caches function results and manages the cache size.
Step 2: Cache stores recent calls
It keeps only the most recent maxsize calls and removes the least recently used when full.
Step 3: Faster repeated calls
When the function is called again with the same argument, it returns the cached result instantly without recomputing.
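A small sketch (using a hypothetical square function, not part of the original example) makes the eviction behavior visible by recording which arguments are actually recomputed:

```python
from functools import lru_cache

calls = []  # records every argument that is actually computed (i.e. cache misses)

@lru_cache(maxsize=2)
def square(n):
    calls.append(n)
    return n * n

square(1)  # miss: computed
square(2)  # miss: computed
square(1)  # hit: served from cache, 1 becomes most recently used
square(3)  # miss: cache full, evicts 2 (the least recently used)
square(2)  # miss again: 2 was evicted and must be recomputed
print(calls)  # [1, 2, 3, 2]
```

Note that square(1) appears only once in calls even though it was requested twice: the second request was a cache hit.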
Alternative Approaches
A manual LRU cache built on collections.OrderedDict, which remembers insertion order and supports move_to_end for recency updates:

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.cache = OrderedDict()
        self.capacity = capacity

    def get(self, key):
        if key not in self.cache:
            return -1
        self.cache.move_to_end(key)  # mark as most recently used
        return self.cache[key]

    def put(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put(1, 1)
cache.put(2, 2)
print(cache.get(1))  # 1
cache.put(3, 3)      # evicts key 2
print(cache.get(2))  # -1 (evicted)
```
functools.cache (Python 3.9+) is an unbounded cache, equivalent to lru_cache(maxsize=None):

```python
from functools import cache

@cache
def fib(n):
    if n < 2:
        return n
    return fib(n-1) + fib(n-2)

print(fib(10))  # 55
```
Complexity: a cache hit costs O(1) on average; a cache miss in the recursive fib triggers O(n) new calls in total, since each value is computed only once thanks to memoization. Space is O(maxsize) for the stored results.
Time Complexity
Cache hits return results instantly in O(1) time. Cache misses compute recursively, which can be expensive but reduced by caching.
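The difference is easy to measure. This sketch compares a plain recursive fib against a cached one (exact timings vary by machine, so no specific numbers are claimed, only the ordering):

```python
import time
from functools import lru_cache

def fib_plain(n):
    # exponential: recomputes the same subproblems over and over
    if n < 2:
        return n
    return fib_plain(n-1) + fib_plain(n-2)

@lru_cache(maxsize=None)
def fib_cached(n):
    # linear: each value is computed once, then served from the cache
    if n < 2:
        return n
    return fib_cached(n-1) + fib_cached(n-2)

t0 = time.perf_counter()
fib_plain(25)
plain = time.perf_counter() - t0

t0 = time.perf_counter()
fib_cached(25)
cached = time.perf_counter() - t0

print(plain > cached)  # True: the cached version avoids exponential recomputation
```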
Space Complexity
The cache stores up to maxsize results, so space grows linearly with cache size.
Which Approach is Fastest?
Using @lru_cache is the fastest and simplest option for most cases. A manual cache offers more control but requires more code. An unlimited cache avoids evictions entirely but can use much more memory.
| Approach | Time | Space | Best For |
|---|---|---|---|
| functools.lru_cache | O(1) cache hit | O(maxsize) | Simple, efficient caching |
| Manual OrderedDict | O(1) cache hit | O(capacity) | Custom cache behavior |
| functools.cache | O(1) cache hit | O(n) | Unlimited cache, simple but memory heavy |
Use the @lru_cache decorator to add caching with a single line of code. Be careful with maxsize=None: it lets the cache grow indefinitely and can use too much memory.
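The unbounded growth is easy to observe, and cache_clear() can be used to release the memory when the cache is no longer needed:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded: one cache entry per distinct argument
def fib(n):
    if n < 2:
        return n
    return fib(n-1) + fib(n-2)

fib(100)
size_before = fib.cache_info().currsize
print(size_before)  # 101: one entry for each n from 0 to 100
fib.cache_clear()   # drop all cached entries, freeing the memory
print(fib.cache_info().currsize)  # 0
```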