
Eviction policies (LRU, LFU, random) in Redis - Time & Space Complexity

Time Complexity of Eviction Policies (LRU, LFU, random): O(1)
Understanding Time Complexity

When Redis runs out of memory, it removes some data using eviction policies.

We want to know how the time needed to pick a victim key grows as the number of stored keys grows.

Scenario Under Consideration

Analyze the time complexity of eviction policy operations in Redis.


// Example: eviction check when a write would push Redis past maxmemory
if (memory_limit_reached) {
  key_to_evict = find_least_recently_used_key();  // approximated by sampling, not a full scan
  delete_key(key_to_evict);
}

// For LFU, Redis keeps a per-key access counter and evicts the least
// frequently used key from a small random sample.
// Random eviction simply deletes a randomly chosen key -- no metadata needed.

This code decides which key to remove when memory is full, using different policies.
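The decision above can be sketched as runnable Python. The function name `evict_one` and the `store`/`metadata` dictionaries are illustrative stand-ins, not Redis internals:

```python
import random

def evict_one(store, metadata, policy):
    """Evict one key according to the chosen policy.

    `store` maps key -> value; `metadata` maps key -> last-access time
    (for LRU) or access count (for LFU).
    """
    if policy == "random":
        victim = random.choice(list(store))
    elif policy in ("lru", "lfu"):
        # Naive exact search: min() scans every key, which is O(n).
        # Real Redis avoids this scan by sampling a few keys instead.
        victim = min(metadata, key=metadata.get)
    else:
        raise ValueError(f"unknown policy: {policy}")
    del store[victim]
    metadata.pop(victim, None)
    return victim
```

Note the comment on the `min()` call: an exact search is linear in the number of keys, which is exactly the cost Redis's sampling approach is designed to avoid.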

Identify Repeating Operations

Look at what Redis does repeatedly to find keys to evict.

  • Primary operation: Finding a key to evict based on policy (LRU, LFU, or random).
  • How many times: Once per eviction event; in addition, Redis updates lightweight per-key metadata (an LRU clock or LFU counter) on every access.
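The per-access bookkeeping in the second bullet can be sketched as follows. The class and field names are made up for illustration; Redis packs comparable information (LRU clock, LFU counter) into each object's header:

```python
import time

class AccessTracker:
    """Per-key usage metadata updated on every access -- O(1) each time."""

    def __init__(self):
        self.last_access = {}   # key -> timestamp, used by LRU
        self.frequency = {}     # key -> access count, used by LFU

    def touch(self, key):
        # Both updates are constant-time dict operations, so tracking
        # usage adds no per-key scan to normal reads and writes.
        self.last_access[key] = time.monotonic()
        self.frequency[key] = self.frequency.get(key, 0) + 1
```

Because each `touch` is constant time, keeping this metadata up to date does not slow down regular commands as the keyspace grows.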
How Execution Grows With Input

As the number of keys grows, how much work does eviction take?

Input Size (n keys)    Approx. Operations per Eviction
10                     Few operations, quick eviction
100                    Still fast, same small overhead per eviction
1000                   Work stays roughly constant, eviction still efficient

Pattern observation: Redis never scans the whole keyspace. For LRU and LFU it examines only a small random sample of keys (maxmemory-samples, 5 by default), so eviction time stays roughly constant no matter how many keys are stored.
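The sampling idea can be sketched in a few lines. This is a simplified model, not Redis's implementation (real Redis also keeps a small pool of good candidates between eviction runs), and `approx_lru_victim` is a hypothetical name:

```python
import random

SAMPLE_SIZE = 5  # Redis's default maxmemory-samples

def approx_lru_victim(keys, last_access, sample_size=SAMPLE_SIZE):
    """Approximate LRU: inspect a fixed-size random sample, not all keys.

    The work per eviction depends on sample_size, not len(keys), which
    is why the cost stays O(1) as the keyspace grows.
    """
    sample = random.sample(keys, min(sample_size, len(keys)))
    return min(sample, key=last_access.get)  # oldest key in the sample
```

Raising maxmemory-samples makes the approximation closer to exact LRU at the cost of slightly more (but still constant) work per eviction.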

Final Time Complexity

Time Complexity: O(1)

This means each eviction takes about the same amount of time no matter how many keys are stored: the work is bounded by the fixed sample size, not by the size of the keyspace.

Common Mistake

[X] Wrong: "Eviction time grows a lot as the database gets bigger because it must check every key."

[OK] Correct: Redis keeps lightweight per-key usage metadata and samples only a handful of keys per eviction, so it never has to scan every key.

Interview Connect

Explaining how eviction policies work, and why approximate sampling keeps their cost constant, shows an interviewer that you understand how Redis manages memory under pressure.

Self-Check

What if Redis used a simple list to track usage instead of efficient data structures? How would the time complexity change?
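One way to explore this question is to sketch the list-based alternative yourself. The `ListLRU` class below is a made-up teaching example, not anything Redis does:

```python
class ListLRU:
    """Naive LRU tracking with a plain Python list.

    `order[0]` is always the least recently used key. Every access must
    find and move the key -- `in` and `list.remove` each scan the list,
    which is O(n) -- so the cost of tracking usage now grows linearly
    with the number of keys, unlike the constant-time approaches above.
    """

    def __init__(self):
        self.order = []

    def touch(self, key):
        if key in self.order:        # O(n) membership check
            self.order.remove(key)   # O(n) removal
        self.order.append(key)       # most recently used at the tail

    def victim(self):
        return self.order[0]         # least recently used at the head
```

With this structure, every read and write pays a scan proportional to the number of keys, so overall time complexity degrades from O(1) to O(n) per access.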