
Cache eviction policies (LRU, LFU, TTL) in HLD - System Design Exercise

Design: Cache Eviction Policy System
This design focuses on the eviction policy mechanism within a cache system. The backing data source, network communication, and persistence are out of scope.
Functional Requirements
FR1: Support caching of data to improve read performance
FR2: Implement eviction policies to remove stale or less useful data when cache is full
FR3: Support three eviction policies: Least Recently Used (LRU), Least Frequently Used (LFU), and Time-To-Live (TTL)
FR4: Allow switching between eviction policies dynamically
FR5: Ensure eviction decisions are efficient to maintain low latency
FR6: Provide metrics on cache hits, misses, and evictions
Non-Functional Requirements
NFR1: Cache size limited to 1 million entries
NFR2: Eviction decision latency must be under 5 milliseconds
NFR3: System must handle 10,000 cache requests per second
NFR4: Availability target of 99.9% uptime
NFR5: Memory usage must be optimized to avoid excessive overhead
Think Before You Design
Questions to Ask
❓ What is the maximum cache size, and what are typical entry sizes?
❓ What throughput and latency targets must the cache meet?
❓ Must eviction policies be switchable at runtime, and what happens to access metadata when the policy changes?
❓ Should TTL expiration be lazy (checked on access) or active (driven by a background scheduler)?
❓ Does the cache need to handle concurrent access safely?
❓ Which metrics are required, and how should they be exposed?
Key Components
Cache storage data structure
Eviction policy manager
Access tracking mechanism (timestamps, counters)
TTL timer or scheduler
Metrics collector
API to switch eviction policies
Design Patterns
LRU implemented with doubly linked list and hashmap
LFU with frequency counters and min-heap or linked lists
TTL using priority queue or timing wheel
Strategy pattern to encapsulate and switch eviction policies; Decorator pattern to layer eviction logic onto a basic cache
Observer pattern for metrics collection
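The hashmap + doubly linked list combination for LRU can be sketched in a few lines. This is a minimal illustration, not a production implementation; it uses Python's OrderedDict, which maintains exactly that structure (a hash table plus a doubly linked list) internally:

```python
from collections import OrderedDict

class LRUCache:
    """LRU via hashmap + doubly linked list (OrderedDict provides both)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)         # O(1): mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # O(1): evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # "a" becomes most recently used
cache.put("c", 3)      # capacity exceeded: "b" is evicted
```

Both `get` and `put` are O(1), which is what keeps eviction latency within the 5 ms budget even at 10,000 requests per second.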
Reference Architecture
  +---------------------+
  |   Client Requests   |
  +----------+----------+
             |
             v
  +---------------------+
  |   Cache Interface   |
  +----------+----------+
             |
             v
  +---------------------+       +---------------------+
  | Eviction Policy Mgr |<----->|  Metrics Collector  |
  +----------+----------+       +---------------------+
             |
             v
  +---------------------+
  |   Cache Storage     |
  | (HashMap + LRU/LFU) |
  +---------------------+
             |
             v
  +---------------------+
  |    TTL Scheduler    |
  +---------------------+
Components
Cache Interface
In-memory data structure
Handles client cache requests and forwards to eviction manager
Eviction Policy Manager
Custom logic with data structures
Implements LRU, LFU, TTL eviction policies and manages switching
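One way to support dynamic switching (FR4) is the Strategy pattern: each policy implements a common interface, and the manager swaps the active strategy at runtime. A hypothetical sketch, with deliberately simplified policies:

```python
from abc import ABC, abstractmethod
from collections import OrderedDict

class EvictionPolicy(ABC):
    @abstractmethod
    def record_access(self, key): ...
    @abstractmethod
    def select_victim(self): ...

class LRUPolicy(EvictionPolicy):
    def __init__(self):
        self.order = OrderedDict()
    def record_access(self, key):
        self.order.pop(key, None)
        self.order[key] = True
    def select_victim(self):
        return next(iter(self.order))          # least recently used key

class LFUPolicy(EvictionPolicy):
    def __init__(self):
        self.counts = {}
    def record_access(self, key):
        self.counts[key] = self.counts.get(key, 0) + 1
    def select_victim(self):
        return min(self.counts, key=self.counts.get)  # lowest frequency

class EvictionPolicyManager:
    def __init__(self, policy):
        self.policy = policy
    def switch_policy(self, new_policy, current_keys):
        # Re-seed the new policy with the live key set; old metadata is discarded.
        for k in current_keys:
            new_policy.record_access(k)
        self.policy = new_policy
```

Note the trade-off made explicit in `switch_policy`: access history does not transfer between policies, so a switch effectively resets eviction ordering.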
Cache Storage
HashMap with auxiliary data structures
Stores cached entries and metadata for eviction
TTL Scheduler
Priority queue or timing wheel
Tracks expiration times and triggers TTL evictions
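The priority-queue variant of the TTL scheduler can be sketched with a min-heap keyed on expiry time, so the next entry to expire is always at the root. A minimal sketch (the `now` parameter is injected here for testability; a real scheduler would use a monotonic clock):

```python
import heapq
import time

class TTLScheduler:
    """Min-heap of (expiry_time, key); the root is the next entry to expire."""
    def __init__(self):
        self.heap = []

    def schedule(self, key, ttl_seconds, now=None):
        now = time.monotonic() if now is None else now
        heapq.heappush(self.heap, (now + ttl_seconds, key))

    def pop_expired(self, now=None):
        """Return all keys whose TTL has elapsed, in expiry order."""
        now = time.monotonic() if now is None else now
        expired = []
        while self.heap and self.heap[0][0] <= now:
            _, key = heapq.heappop(self.heap)
            expired.append(key)
        return expired
```

Scheduling is O(log n) per entry; a periodic sweep calls `pop_expired` and asks the cache to drop the returned keys.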
Metrics Collector
Counters and logging
Collects cache hits, misses, and eviction statistics
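A metrics collector can be as simple as a few guarded counters. This sketch adds a lock because the cache must serve 10,000 requests per second (NFR3), where unsynchronized increments would lose updates; the batching optimization discussed later under Scaling is omitted here:

```python
import threading

class MetricsCollector:
    """Thread-safe hit/miss/eviction counters."""
    def __init__(self):
        self._lock = threading.Lock()
        self.hits = self.misses = self.evictions = 0

    def record(self, event):
        with self._lock:
            if event == "hit":
                self.hits += 1
            elif event == "miss":
                self.misses += 1
            elif event == "eviction":
                self.evictions += 1

    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```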
Request Flow
1. Client sends a cache read or write request to Cache Interface.
2. Cache Interface checks Cache Storage for the key.
3. If the key is found, update its access metadata (timestamp or frequency) via the Eviction Policy Manager and return the value.
4. If the key is not found, fetch the value from the source (out of scope) and insert it into Cache Storage.
5. Eviction Policy Manager checks if cache size exceeds limit.
6. If full, Eviction Policy Manager selects entries to evict based on current policy (LRU, LFU, or TTL).
7. TTL Scheduler triggers eviction for expired entries asynchronously.
8. Metrics Collector updates hit/miss/eviction counters after each operation.
9. Client receives response with cached data or miss indication.
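The steps above (excluding the asynchronous TTL path) can be condensed into one self-contained sketch. It assumes an LRU policy and keeps metrics as plain in-object counters for brevity:

```python
from collections import OrderedDict

class SimpleCache:
    """Request flow: hit -> update metadata; miss -> insert; full -> evict."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()   # access order doubles as LRU metadata
        self.hits = self.misses = self.evictions = 0

    def get(self, key):
        if key in self.store:
            self.store.move_to_end(key)        # step 3: update access metadata
            self.hits += 1                     # step 8: metrics
            return self.store[key]
        self.misses += 1                       # miss; source fetch is out of scope
        return None

    def put(self, key, value):
        self.store[key] = value                # step 4: insert
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:    # steps 5-6: capacity check + eviction
            self.store.popitem(last=False)
            self.evictions += 1
```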
Database Schema
Not applicable as this is an in-memory cache system. Key entities are cache entries with fields: key, value, last_access_time, access_frequency, expiration_time. Relationships are managed via data structures: hashmap for key lookup, linked list or heap for eviction ordering.
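The cache-entry entity above maps directly onto a small record type; one possible shape, with each field annotated by the policy it serves:

```python
from dataclasses import dataclass

@dataclass
class CacheEntry:
    key: str
    value: object
    last_access_time: float   # drives LRU ordering
    access_frequency: int     # drives LFU ordering
    expiration_time: float    # drives TTL eviction
```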
Scaling Discussion
Bottlenecks
Eviction decision latency increases with cache size
Memory overhead from tracking access metadata
TTL scheduler overhead with many expiring entries
Metrics collection causing contention under high load
Solutions
Use efficient data structures like doubly linked lists for LRU and frequency lists for LFU to keep eviction O(1)
Limit metadata size per entry and use approximate counters if needed
Implement hierarchical timing wheels or bucketed TTL to reduce scheduler overhead
Batch metrics updates asynchronously to reduce contention
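The "approximate counters" idea can be illustrated with a Morris-style probabilistic counter: instead of a full integer per entry, store a small exponent and increment it with probability 2^-c, so roughly 2^c - 1 accesses are represented in a few bits. A sketch of the idea (real LFU variants such as TinyLFU use related compact counters):

```python
import random

class ApproxCounter:
    """Morris-style probabilistic counter: n increments are approximated
    by 2^c - 1 while storing only the small exponent c."""
    def __init__(self):
        self.c = 0

    def increment(self):
        if random.random() < 2 ** -self.c:   # fire with probability 2^-c
            self.c += 1

    def estimate(self):
        return (1 << self.c) - 1
```

The trade-off is noise in the estimate for exactness-insensitive decisions like "which entry is used least", which is usually acceptable.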
Interview Tips
Time: Spend 10 minutes clarifying requirements and constraints, 20 minutes designing the eviction policies and data structures, 10 minutes discussing scaling and trade-offs, and 5 minutes summarizing.
Explain differences and use cases for LRU, LFU, and TTL
Describe data structures used for each eviction policy
Discuss how to switch policies dynamically
Highlight performance considerations and latency targets
Mention how metrics help monitor cache health
Address scaling challenges and practical solutions