Imagine many users request the same data at the same time, but the cache for that data has expired. What causes a cache stampede in this scenario?
Think about what happens when many requests try to get data that is not in the cache.
A cache stampede happens when many requests miss the cache at the same time and each one independently queries the database for the same data, causing a sudden load spike.
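The miss-then-query behavior can be simulated with a few threads. This is a minimal sketch, not production code: the key name, thread count, and the sleep that stands in for database latency are all illustrative.

```python
import threading
import time

cache = {}          # the key has expired, so the cache is empty
db_queries = 0      # counts how many "database queries" are issued
counter_lock = threading.Lock()

def handle_request(key):
    global db_queries
    if key not in cache:            # every concurrent request sees a miss...
        with counter_lock:
            db_queries += 1         # ...and each miss triggers its own query
        time.sleep(0.05)            # simulated database latency
        cache[key] = "fresh value"

threads = [threading.Thread(target=handle_request, args=("user:42",))
           for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"database queries: {db_queries}")
```

Because every request checks the cache before any of them has finished refreshing it, the query count approaches the number of concurrent requests rather than staying at one.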
Choose the architecture pattern that ensures only one request fetches fresh data from the database while others wait for the cache to be updated.
Think about how to coordinate requests so only one updates the cache at a time.
Using a distributed lock ensures only one request refreshes the cache while the others wait for the updated entry, preventing redundant database hits.
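A sketch of the pattern, with a process-local `threading.Lock` standing in for a real distributed lock (in production this would be something like Redis `SET key value NX EX ttl`). The double-check after acquiring the lock prevents a late waiter from re-querying:

```python
import threading
import time

cache = {}                       # the key has expired, so the cache is empty
refresh_lock = threading.Lock()  # stand-in for a distributed lock
db_queries = 0

def get(key):
    global db_queries
    if key in cache:
        return cache[key]
    if refresh_lock.acquire(blocking=False):   # exactly one request wins
        try:
            if key not in cache:               # double-check after winning
                db_queries += 1                # the single database fetch
                time.sleep(0.05)               # simulated query latency
                cache[key] = "fresh value"
        finally:
            refresh_lock.release()
    else:
        while key not in cache:                # everyone else waits for the refresh
            time.sleep(0.005)
    return cache[key]

threads = [threading.Thread(target=get, args=("user:42",)) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"database queries: {db_queries}")   # 1: one fetch served the whole burst
```

The same 50-request burst that caused 50 queries without coordination now produces exactly one.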
Consider a cache system that refreshes data before it expires. What is the main benefit of this approach?
Think about how refreshing cache early affects request timing.
Early recomputation refreshes the cache before it expires, so requests rarely encounter an expired entry and the stampede never starts.
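One way to sketch early recomputation: store each entry with its expiry time, and when a read lands inside a window just before expiry, kick off a background refresh while still serving the current value. The TTL, window size, key name, and `fetch_from_db` placeholder are all illustrative assumptions.

```python
import threading
import time

TTL = 60.0             # normal cache lifetime, seconds
REFRESH_WINDOW = 10.0  # start refreshing this long before expiry

# entry: key -> (value, expires_at); this one is close to expiring
cache = {"user:42": ("current value", time.time() + 5.0)}
refreshing = set()               # keys with a refresh already in flight
refresh_guard = threading.Lock()

def fetch_from_db(key):
    return f"fresh value for {key}"   # placeholder for the real query

def get(key):
    value, expires_at = cache[key]
    if time.time() > expires_at - REFRESH_WINDOW:
        # Still valid, but close to expiring: refresh in the background
        # so no future request ever sees an expired entry.
        with refresh_guard:
            should_refresh = key not in refreshing
            if should_refresh:
                refreshing.add(key)
        if should_refresh:
            def refresh():
                cache[key] = (fetch_from_db(key), time.time() + TTL)
                with refresh_guard:
                    refreshing.discard(key)
            threading.Thread(target=refresh).start()
    return value   # the caller always gets an answer immediately

print(get("user:42"))   # serves the current value while the refresh runs
```

The request that triggers the refresh does not pay for it: it returns the still-valid value immediately, and the next requests find a renewed entry.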
Request coalescing groups multiple requests for the same data to a single database fetch. What is a downside of this approach?
Consider what happens to requests that wait for others to finish fetching data.
Request coalescing reduces database load, but requests that join an already in-flight fetch must wait for it to complete, so their latency increases.
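A sketch of coalescing using a shared `Future` per key: the first request (the "leader") performs the fetch, and later requests block on the same future instead of querying. The recorded wait times make the latency cost visible. Key name, thread count, and the simulated query latency are illustrative, and error handling is omitted for brevity.

```python
import threading
import time
from concurrent.futures import Future

inflight = {}                  # key -> Future shared by all coalesced requests
inflight_lock = threading.Lock()
db_queries = 0
wait_times = []

def fetch_from_db(key):
    global db_queries
    db_queries += 1
    time.sleep(0.05)           # simulated query latency
    return f"value for {key}"

def get(key):
    start = time.time()
    with inflight_lock:
        fut = inflight.get(key)
        leader = fut is None
        if leader:
            fut = Future()
            inflight[key] = fut
    if leader:
        fut.set_result(fetch_from_db(key))
        with inflight_lock:
            inflight.pop(key, None)
    value = fut.result()       # followers block here: the coalescing latency cost
    wait_times.append(time.time() - start)
    return value

threads = [threading.Thread(target=get, args=("user:42",)) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"queries: {db_queries}, slowest wait: {max(wait_times):.3f}s")
```

The burst collapses to roughly one query, but no follower can finish faster than the leader's fetch: every coalesced request pays at least the full database round-trip.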
If 10,000 users request the same expired cache key at the same time, and each cache miss triggers one database query, what is the maximum number of database queries generated?
Think about how many requests miss the cache and query the database simultaneously.
Without prevention, all 10,000 requests miss the cache and each triggers a database query, causing 10,000 queries.
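The worst-case arithmetic, written out as a small calculation for comparison against the protected case:

```python
concurrent_requests = 10_000
queries_per_miss = 1

# Without stampede protection, every request misses and queries on its own:
worst_case_queries = concurrent_requests * queries_per_miss
print(worst_case_queries)   # 10000

# With a lock or request coalescing, one fetch serves the entire burst:
protected_queries = 1
print(f"load reduced by {worst_case_queries // protected_queries}x")
```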