Redis · ~15 mins

Eviction policies (LRU, LFU, random) in Redis - Deep Dive

Overview - Eviction policies (LRU, LFU, random)
What is it?
Eviction policies are rules that decide which data to remove when a database like Redis runs out of memory. They help Redis keep working smoothly by freeing space for new data. Common eviction policies include LRU (Least Recently Used), LFU (Least Frequently Used), and random eviction. Each policy chooses data to remove based on different ideas about what data is less important.
Why it matters
Without eviction policies, Redis would stop accepting new data once memory is full, causing errors or crashes. This would make applications slow or unusable. Eviction policies ensure Redis can keep running by smartly removing less important data, so apps stay fast and reliable even under heavy use.
Where it fits
Before learning eviction policies, you should understand basic Redis data storage and memory limits. After this, you can explore Redis persistence, replication, and performance tuning to build robust applications.
Mental Model
Core Idea
Eviction policies decide which data to remove when memory is full, balancing speed and usefulness to keep Redis running smoothly.
Think of it like...
Imagine a backpack that can only hold so much. When it’s full, you must decide which items to take out to make room for new ones. LRU is like removing the item you haven’t used in a long time, LFU is like removing the item you rarely use, and random is like closing your eyes and picking any item to remove.
┌───────────────┐
│ Redis Memory  │
│    Full?      │
└──────┬────────┘
       │
       ▼
┌─────────────────────────────┐
│ Eviction Policy Chooses Key │
│ ┌───────────────┐           │
│ │ LRU: Least    │           │
│ │ Recently Used │           │
│ ├───────────────┤           │
│ │ LFU: Least    │           │
│ │ Frequently    │           │
│ │ Used          │           │
│ ├───────────────┤           │
│ │ Random: Any   │           │
│ │ Key           │           │
│ └───────────────┘           │
└───────────────┬─────────────┘
                │
                ▼
       ┌─────────────────┐
│ Remove Key from │
       │ Memory to Free  │
       │ Space           │
       └─────────────────┘
Build-Up - 7 Steps
1
Foundation: What is Memory Eviction in Redis
Concept: Memory eviction is the process Redis uses to remove data when it runs out of memory.
Redis stores data in memory for fast access. But memory is limited. When Redis reaches its memory limit, it must remove some data to make room for new data. This removal process is called eviction. Without eviction, Redis would stop accepting new data and cause errors.
Result
Redis frees up memory space by removing some keys, allowing new data to be stored.
Understanding eviction is key to managing Redis memory and ensuring your application keeps running without crashes.
2
Foundation: Redis Memory Limits and the maxmemory Setting
Concept: Redis allows setting a maximum memory limit to control how much RAM it uses.
You can configure Redis with the maxmemory setting to limit how much RAM it can use. When Redis reaches this limit, it triggers eviction policies to remove keys. This setting helps prevent Redis from using too much memory and affecting other parts of your system.
Result
Redis stops growing memory usage beyond the set limit and starts evicting keys.
Knowing how to set maxmemory is essential before choosing an eviction policy.
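To make this concrete, here is a minimal redis.conf sketch; the 100 MB cap and the allkeys-lru policy are illustrative choices, not recommendations:

```conf
# Cap Redis memory usage at 100 MB (illustrative value).
maxmemory 100mb

# What Redis should do when the cap is reached (policies are covered below).
maxmemory-policy allkeys-lru
```

Both settings can also be changed at runtime with CONFIG SET maxmemory 100mb and CONFIG SET maxmemory-policy allkeys-lru.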
3
Intermediate: Least Recently Used (LRU) Eviction Policy
🤔Before reading on: do you think LRU removes the oldest data or the data used least often? Commit to your answer.
Concept: LRU removes the keys that have not been accessed for the longest time.
LRU tracks when keys were last accessed. When eviction is needed, it removes the keys that haven’t been used recently. This assumes that data not used for a while is less likely to be needed soon. Redis approximates LRU to keep performance high.
Result
Keys unused for the longest time are removed first, freeing memory.
Understanding LRU helps you predict which data Redis will keep or remove under memory pressure.
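The idea can be sketched in a few lines of plain Python; this is a toy model, not Redis's actual implementation (Redis only approximates LRU, as a later step explains):

```python
from collections import OrderedDict

class LRUCache:
    """Toy LRU cache: evicts the least recently *accessed* key when full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # order of entries doubles as recency order

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            evicted, _ = self.data.popitem(last=False)  # least recently used
            return evicted
        return None

cache = LRUCache(capacity=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")               # touch "a", so "b" is now least recently used
evicted = cache.set("c", 3)  # evicts "b", even though "a" was stored first
```

Note how "b" is evicted rather than "a": recency of access, not age of storage, decides.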
4
Intermediate: Least Frequently Used (LFU) Eviction Policy
🤔Before reading on: does LFU remove keys used least recently or least often? Commit to your answer.
Concept: LFU removes keys that have been accessed the fewest times over time.
LFU counts how often each key is accessed. When eviction is needed, Redis removes the keys with the lowest usage counts. This assumes that rarely used data is less important. LFU is useful when you want to keep frequently accessed data longer, regardless of how recently it was last touched.
Result
Keys with the lowest access frequency are removed first.
Knowing LFU helps you manage data that is important because it is used often, not just recently.
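The contrast with LRU shows up clearly in a toy sketch (plain Python, not Redis's real implementation, which uses decaying approximate counters):

```python
class LFUCache:
    """Toy LFU cache: evicts the key with the fewest total accesses when full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}
        self.counts = {}  # access count per key

    def get(self, key):
        if key not in self.data:
            return None
        self.counts[key] += 1
        return self.data[key]

    def set(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            victim = min(self.counts, key=self.counts.get)  # lowest frequency
            del self.data[victim]
            del self.counts[victim]
        self.data[key] = value
        self.counts[key] = self.counts.get(key, 0) + 1

cache = LFUCache(capacity=2)
cache.set("hot", 1)
cache.set("cold", 2)
cache.get("hot")
cache.get("hot")        # "hot" now has count 3, "cold" only 1
cache.set("new", 3)     # evicts "cold", the least frequently used key
```

Here "cold" is evicted even though it was written more recently than "hot": frequency, not recency, decides.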
5
Intermediate: Random Eviction Policy
Concept: Random eviction removes any key at random when memory is full.
This policy does not track usage or frequency; it simply picks keys at random to remove. That makes it simple and fast, but it can remove important data by chance. It is useful when you want very fast eviction without the overhead of tracking usage.
Result
Any key can be removed, freeing memory quickly but unpredictably.
Understanding random eviction helps when performance is more important than data importance.
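Random eviction needs no bookkeeping at all, which is exactly its appeal; a minimal sketch in Python:

```python
import random

def evict_random(store, rng=random):
    """Remove one key chosen uniformly at random; no usage tracking needed."""
    victim = rng.choice(list(store))
    del store[victim]
    return victim

store = {"a": 1, "b": 2, "c": 3}
victim = evict_random(store)  # any of "a", "b", "c" may be gone now
```

Eviction costs a single random pick here, regardless of how the keys have been used.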
6
Advanced: How Redis Approximates LRU and LFU
🤔Before reading on: do you think Redis tracks exact usage for all keys or uses an approximation? Commit to your answer.
Concept: Redis uses approximations to track usage efficiently without slowing down performance.
Tracking exact usage for every key would be slow and use extra memory. Redis uses sampling and counters to approximate LRU and LFU. For LRU, it samples a few keys and removes the least recently used among them. For LFU, it uses small counters that decay over time to estimate frequency.
Result
Redis evicts keys based on approximate usage, balancing accuracy and speed.
Knowing Redis uses approximations explains why eviction may not always remove the absolute least used key but keeps performance high.
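The sampling trick is easy to model; this sketch approximates LRU the way the text describes, by sampling a handful of keys and evicting the oldest access among the sample (which may not be the globally oldest):

```python
import random

def approx_lru_evict(last_access, sample_size=5, rng=random):
    """Evict the least recently used key *among a random sample*,
    not the global minimum, so no full scan is ever needed."""
    sample = rng.sample(list(last_access), min(sample_size, len(last_access)))
    victim = min(sample, key=last_access.get)  # oldest access time in sample
    del last_access[victim]
    return victim

# Fake last-access timestamps for 20 keys; "k0" is globally the oldest.
last_access = {f"k{i}": 1000.0 + i for i in range(20)}
victim = approx_lru_evict(last_access)  # usually an old key, not always "k0"
```

Redis's maxmemory-samples setting plays the role of sample_size here: a larger sample gives a closer approximation at a higher cost per eviction.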
7
Expert: Trade-offs and Choosing the Right Eviction Policy
🤔Before reading on: which eviction policy do you think is best for caching frequently accessed data? Commit to your answer.
Concept: Each eviction policy has trade-offs in complexity, performance, and data importance preservation.
LRU is a good fit when recent usage predicts future usage. LFU is better when frequency matters more than recency. Random is fastest but least precise. The right choice depends on your data patterns and performance needs. Redis also offers a noeviction mode that never removes data but returns errors on writes once memory is full.
Result
Selecting the right policy improves application performance and reliability under memory pressure.
Understanding trade-offs helps you tailor Redis behavior to your specific use case and avoid surprises in production.
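For reference, these are the values Redis accepts for maxmemory-policy; the volatile-* variants only consider keys that have an expiration (TTL) set:

```conf
# maxmemory-policy options (pick one):
#   noeviction      reject new writes when memory is full (the default)
#   allkeys-lru     evict the least recently used key, among all keys
#   volatile-lru    LRU, but only among keys with a TTL
#   allkeys-lfu     evict the least frequently used key, among all keys
#   volatile-lfu    LFU, but only among keys with a TTL
#   allkeys-random  evict a random key, among all keys
#   volatile-random evict a random key, among keys with a TTL
#   volatile-ttl    evict the key with the nearest expiration time
maxmemory-policy allkeys-lru
```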
Under the Hood
Redis maintains metadata for keys to track usage. For LRU, it stores last access times or uses a clock algorithm with sampling. For LFU, it uses small counters that increment on access and decay over time to avoid stale counts. When memory is full, Redis samples a subset of keys and evicts the one with the lowest usage metric. This sampling avoids scanning all keys, keeping eviction fast.
Why designed this way?
Exact tracking of usage for all keys would slow Redis and increase memory use, defeating its purpose as a fast in-memory store. Sampling and approximations balance accuracy and speed. Random eviction is simplest and fastest but less intelligent. These designs reflect Redis’s goal of high performance with configurable trade-offs.
┌───────────────┐
│ Redis Memory  │
│    Full?      │
└───────┬───────┘
        │
        ▼
┌─────────────────────────────┐
│  Sample N Keys from Memory  │
├──────────────┬──────────────┤
│ LRU: Check   │ LFU: Check   │
│ last access  │ usage        │
│ times        │ counter      │
└──────┬───────┴──────┬───────┘
       │              │
       ▼              ▼
┌──────────────┐ ┌──────────────┐
│ Find Key with│ │ Find Key with│
│ Lowest LRU   │ │ Lowest LFU   │
│ Value        │ │ Value        │
└──────┬───────┘ └──────┬───────┘
       │                │
       ▼                ▼
┌─────────────────────────────┐
│ Remove Selected Key from    │
│ Memory to Free Space        │
└─────────────────────────────┘
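The decaying LFU counter can be sketched as follows; this is a simplification of Redis's scheme (real Redis also subtracts an initial baseline value from the counter before computing the increment probability, which is omitted here):

```python
import random

LFU_LOG_FACTOR = 10  # matches the default lfu-log-factor setting

def lfu_incr(counter, rng=random):
    """Logarithmic counter bump: the higher the counter already is,
    the less likely another access will increment it (8-bit cap)."""
    if counter >= 255:
        return counter
    if rng.random() < 1.0 / (counter * LFU_LOG_FACTOR + 1):
        return counter + 1
    return counter

def lfu_decay(counter, idle_minutes, decay_time=1):
    """Subtract one for each elapsed decay period (cf. lfu-decay-time)."""
    return max(0, counter - idle_minutes // decay_time)

print(lfu_incr(0))       # always 1: a zero counter is bumped with probability 1
print(lfu_decay(10, 3))  # 7: three decay periods passed while the key was idle
```

This is why a key that was hot long ago loses its advantage: idle time erodes the counter, letting recently popular keys win.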
Myth Busters - 4 Common Misconceptions
Quick: Does LRU always remove the oldest stored key? Commit yes or no.
Common Belief: LRU always removes the oldest key stored in Redis.
Reality: LRU removes the key that was least recently accessed, not necessarily the oldest stored key.
Why it matters: Confusing age with usage can lead to wrong expectations about which data Redis will evict, causing unexpected cache misses.
Quick: Does LFU count accesses forever without forgetting? Commit yes or no.
Common Belief: LFU counts all accesses forever, so old usage always affects eviction.
Reality: LFU counters decay over time, so old accesses lose influence and recent usage matters more.
Why it matters: Without decay, rarely used keys might never get evicted, wasting memory and hurting performance.
Quick: Is random eviction always the worst choice? Commit yes or no.
Common Belief: Random eviction is always bad because it removes important data unpredictably.
Reality: Random eviction is simple and fast, and useful when performance is critical and usage patterns are unpredictable.
Why it matters: Dismissing random eviction outright can miss opportunities for performance gains in some scenarios.
Quick: Does Redis scan all keys to find eviction candidates? Commit yes or no.
Common Belief: Redis scans all keys to find the best eviction candidate.
Reality: Redis samples a small number of keys to approximate the best candidate, avoiding expensive full scans.
Why it matters: Expecting full scans can lead to wrong assumptions about Redis performance under eviction.
Expert Zone
1
Redis’s LRU and LFU implementations use probabilistic data structures and sampling to keep eviction fast and memory overhead low.
2
LFU counters decay using logarithmic aging, which balances between forgetting old usage and keeping recent access relevant.
3
Eviction policies interact with Redis persistence and replication, affecting data durability and consistency in subtle ways.
When NOT to use
Eviction policies are not suitable when data loss is unacceptable. In such cases, use Redis persistence with the noeviction policy, or scale horizontally with sharding. For very large datasets, consider external caching layers or databases designed for disk storage.
Production Patterns
In production, LRU is commonly used for caching to keep recent data. LFU is chosen when frequency matters, like in recommendation systems. Random eviction is used in high-throughput scenarios where eviction overhead must be minimal. Monitoring eviction events and memory usage is critical to tune policies effectively.
Connections
Cache Replacement Algorithms
Eviction policies in Redis are specific examples of cache replacement algorithms used in computer systems.
Understanding general cache replacement helps grasp why LRU and LFU are popular and how they balance hit rate and complexity.
Garbage Collection in Programming Languages
Both eviction policies and garbage collection decide what data to remove to free memory based on usage patterns.
Knowing how garbage collectors work clarifies why Redis uses approximations and sampling to avoid performance hits.
Inventory Management in Retail
Eviction policies are like deciding which products to remove from shelves based on sales frequency or last sale date.
This connection shows how managing limited space and prioritizing items is a common problem across fields.
Common Pitfalls
#1 Setting maxmemory without choosing an eviction policy causes Redis to return errors on writes when memory is full.
Wrong approach: maxmemory 100mb (no maxmemory-policy set, so the default noeviction applies)
Correct approach: maxmemory 100mb combined with maxmemory-policy allkeys-lru
Root cause: Learners forget that maxmemory alone does not tell Redis how to free memory, so writes fail with out-of-memory errors.
#2 Using random eviction for critical data caching causes important data to be removed unpredictably.
Wrong approach: maxmemory-policy allkeys-random
Correct approach: maxmemory-policy allkeys-lfu
Root cause: Choosing an eviction policy without considering data importance leads to poor cache hit rates.
#3 Assuming LFU counts accesses exactly and never forgets, causing confusion when keys are evicted unexpectedly.
Wrong approach: Expecting LFU to keep keys forever because they were accessed frequently in the past.
Correct approach: Understand that LFU counters decay over time, so recent usage matters more.
Root cause: Misunderstanding LFU counter decay leads to wrong expectations about when keys are evicted.
Key Takeaways
Eviction policies in Redis decide which keys to remove when memory is full to keep the system running smoothly.
LRU removes keys not used recently, LFU removes keys used least often, and random eviction removes keys unpredictably.
Redis uses approximations and sampling to track usage efficiently without slowing down performance.
Choosing the right eviction policy depends on your data usage patterns and performance needs.
Misunderstanding eviction behavior can cause unexpected data loss or performance issues in your application.