# Eviction policies (LRU, LFU, random) in Redis: Time & Space Complexity
When Redis reaches its configured `maxmemory` limit, it frees space by evicting keys according to the configured eviction policy (e.g. `allkeys-lru`, `allkeys-lfu`, `allkeys-random`).
We want to know how the time to pick a key to evict grows as the number of stored keys grows.
Analyze the time complexity of eviction policy operations in Redis.
```c
// Example: LRU eviction check (pseudocode)
if (memory_limit_reached) {
    key_to_evict = find_least_recently_used_key();
    delete_key(key_to_evict);
}
// For LFU, Redis updates per-key usage counters and evicts the least frequently used key
// Random eviction (allkeys-random) simply picks a random key to delete
```
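In practice, Redis does not maintain a perfectly ordered LRU structure: it approximates LRU by sampling a few random keys and evicting the least recently used one among the sample. A minimal Python sketch of that idea (function and variable names here are illustrative, not Redis's actual API):

```python
import random

def evict_one_lru_sampled(store, last_access, sample_size=5):
    """Evict one key using sampled (approximated) LRU.

    Instead of scanning all keys, pick `sample_size` random keys and
    evict the one with the oldest access timestamp. The work done is
    O(sample_size) per eviction, regardless of how many keys exist.
    """
    candidates = random.sample(list(store), min(sample_size, len(store)))
    victim = min(candidates, key=lambda k: last_access[k])
    del store[victim]
    del last_access[victim]
    return victim
```

Note that if `sample_size` is at least as large as the keyspace, this degenerates into exact LRU; with a small fixed sample it trades a little precision for constant per-eviction cost.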
This pseudocode decides which key to remove when memory is full, depending on the configured policy.
To analyze the cost, look at what Redis does repeatedly in order to find keys to evict.
- Primary operation: finding a key to evict according to the policy (LRU, LFU, or random).
- How many times: once per eviction event. Crucially, Redis does not scan the whole keyspace: it samples a small, fixed number of keys (controlled by `maxmemory-samples`, default 5) and evicts the best candidate among them.
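For LFU specifically, Redis also needs to update a per-key usage counter on every access without letting an 8-bit counter overflow, so it increments the counter probabilistically (logarithmically). A hedged sketch of that idea; the exact constants and formula here mirror the general approach but are an assumption of this sketch, not Redis's source:

```python
import random

def lfu_increment(counter, lfu_log_factor=10):
    """Probabilistically increment an LFU counter (logarithmic growth).

    The higher the counter already is, the less likely an increment,
    so the 8-bit counter saturates slowly even for very hot keys.
    Constants are illustrative, not copied from Redis.
    """
    if counter >= 255:  # 8-bit counter: saturate, never overflow
        return counter
    p = 1.0 / (counter * lfu_log_factor + 1)
    if random.random() < p:
        counter += 1
    return counter
```

With `lfu_log_factor=0` the increment probability is always 1, which makes the behavior deterministic; larger factors make high counters increasingly "sticky."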
As the number of keys grows, how much work does a single eviction take?
| Input Size (n keys) | Approx. Operations per Eviction |
|---|---|
| 10 | ~5 keys sampled and compared |
| 100 | ~5 keys sampled and compared |
| 1000 | ~5 keys sampled and compared |
Pattern observation: Redis uses approximated LRU/LFU based on random sampling, so per-eviction work is bounded by the sample size, not by the total number of keys.
Time Complexity: O(1) per eviction
This means a single eviction takes about the same time whether the database holds a thousand keys or a billion.
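The constant-work claim can be checked directly by counting how many keys one sampled eviction examines at different database sizes. A small sketch (names are illustrative):

```python
import random

def eviction_cost(num_keys, sample_size=5):
    """Count how many keys are examined for one sampled-LRU eviction.

    The number of comparisons is bounded by sample_size, not by the
    total number of keys -- which is why the per-eviction cost is O(1).
    """
    keys = list(range(num_keys))
    sample = random.sample(keys, min(sample_size, num_keys))
    return len(sample)  # work scales with the sample, not num_keys
```

Running it for n = 100 and n = 100,000 examines the same 5 keys either way: the cost curve is flat, not linear.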
[X] Wrong: "Eviction time grows a lot as the database gets bigger because it must check every key."
[OK] Correct: Redis stores usage metadata inside each key's object (a small LRU clock or LFU counter) and samples only a handful of keys per eviction, so it never scans all keys.
Understanding how eviction policies work and what they cost shows you understand how Redis manages memory efficiently.
What if Redis used a simple, unordered list to track usage instead? How would the time complexity of each eviction change?
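One way to explore that question: with a flat usage record and no sampling, finding the least recently used key requires scanning every entry, making each eviction O(n). A sketch of the naive approach (names are illustrative):

```python
def evict_lru_full_scan(store, last_access):
    """Naive LRU: scan every key to find the oldest -- O(n) per eviction.

    Contrast with sampled eviction: here the work grows linearly with
    the number of keys, so a large database pays a large cost on every
    single eviction.
    """
    victim = min(last_access, key=last_access.get)  # examines all n keys
    del store[victim]
    del last_access[victim]
    return victim
```

An exact O(1) LRU is also possible with a hash map plus a doubly linked list, but that costs extra memory per key; Redis's sampling approach avoids that overhead at the price of an approximate answer.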