Imagine a busy coffee shop where many customers order the same drink. Instead of making each drink from scratch every time, the barista prepares a batch in advance. How does this idea relate to caching in REST APIs?
Choose the best explanation for why caching reduces server load.
Think about how reusing prepared data saves time and effort.
Caching saves server resources by reusing stored responses for repeated requests. This avoids repeating expensive operations like database queries or computations, thus reducing server load and improving response time.
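As a concrete illustration, Python's built-in functools.lru_cache can memoize a function that stands in for an expensive server operation. This is a sketch, not part of the questions below; slow_query is a hypothetical placeholder for a database call:

```python
import functools

@functools.lru_cache(maxsize=None)
def slow_query(user_id):
    # Stands in for an expensive database query or computation.
    return f"profile for {user_id}"

slow_query("user1")  # computed and stored
slow_query("user1")  # served from the cache; the body does not run again
print(slow_query.cache_info().hits)    # 1
print(slow_query.cache_info().misses)  # 1
```

The second call never reaches the function body, which is exactly how a cached REST endpoint avoids repeating work for identical requests.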
Consider this Python code simulating a simple cache for API responses. What is the output?
cache = {}

def get_data(key):
    if key in cache:
        return f"Cache hit: {cache[key]}"
    else:
        data = f"Data for {key}"
        cache[key] = data
        return f"Cache miss: {data}"
print(get_data('user1'))
print(get_data('user1'))
print(get_data('user2'))
Check when the cache is empty and when it has stored data.
The first call for 'user1' is a miss because the cache is empty. It stores the data. The second call for 'user1' hits the cache and returns stored data. The call for 'user2' is a miss because it's new.
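Tracing the three calls makes the hit/miss pattern concrete; this repeats the snippet above with each expected output annotated:

```python
cache = {}

def get_data(key):
    if key in cache:
        return f"Cache hit: {cache[key]}"
    else:
        data = f"Data for {key}"
        cache[key] = data
        return f"Cache miss: {data}"

print(get_data('user1'))  # Cache miss: Data for user1
print(get_data('user1'))  # Cache hit: Data for user1
print(get_data('user2'))  # Cache miss: Data for user2
```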
This code tries to cache API responses but does not reduce server load as expected. What is the main problem?
cache = {}

def fetch_data(key):
    if key not in cache:
        data = f"Data for {key}"
    cache[key] = data
    return cache[key]
print(fetch_data('item1'))
print(fetch_data('item1'))
Look at what happens when the key is already in the cache.
When the key is already in the cache, the 'if' block is skipped, so 'data' is never assigned. The next line, 'cache[key] = data', then references the undefined name 'data' and raises a NameError — precisely on the cache-hit path that caching is supposed to make fast.
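One way to fix it (a sketch keeping the same names) is to move the cache assignment inside the 'if' block, so nothing references 'data' on the hit path:

```python
cache = {}

def fetch_data(key):
    if key not in cache:
        # Compute and store only when the key is genuinely missing.
        cache[key] = f"Data for {key}"
    return cache[key]

print(fetch_data('item1'))  # Data for item1 (computed and stored)
print(fetch_data('item1'))  # Data for item1 (returned from cache, no error)
```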
Which of the following Python code snippets correctly implements a cache to reduce server load?
Check which code stores and returns cached data properly.
Option C correctly checks whether the key is in the cache and returns the stored response if so; otherwise it creates the response, stores it, and returns it. The other options contain errors such as using the wrong data structure, missing a return statement, or overwriting cached entries on every call.
A server receives 1000 identical requests per minute. Without caching, each request takes 50ms of server processing time. With caching, 80% of requests are served from cache instantly (0ms processing), and 20% require full processing.
What is the total server processing time per minute with caching?
Calculate time for cached and non-cached requests separately, then add.
20% of 1000 requests = 200 requests need full processing. Each takes 50ms, so 200 * 50ms = 10000ms. The 800 cached requests take 0ms. Total = 10000ms (10 seconds) of processing per minute, an 80% reduction.