
Why caching reduces server load in REST APIs - Challenge Your Understanding

Challenge - 5 Problems
🧠 Conceptual (intermediate)
How does caching reduce server load?

Imagine a busy coffee shop where many customers order the same drink. Instead of making each drink from scratch every time, the barista prepares a batch in advance. How does this idea relate to caching in REST APIs?

Choose the best explanation for why caching reduces server load.

A) Caching stores responses so the server can skip processing repeated requests, reducing work and speeding up responses.
B) Caching increases server load by storing extra data and making the server check the cache for every request.
C) Caching forces the server to process every request twice, once for the cache and once for the client.
D) Caching deletes old data from the server, which reduces the amount of data the server holds.
💡 Hint

Think about how reusing prepared data saves time and effort.

Predict Output (intermediate)
Output of caching simulation code

Consider this Python code simulating a simple cache for API responses. What is the output?

Python
cache = {}
def get_data(key):
    if key in cache:
        return f"Cache hit: {cache[key]}"
    else:
        data = f"Data for {key}"
        cache[key] = data
        return f"Cache miss: {data}"

print(get_data('user1'))
print(get_data('user1'))
print(get_data('user2'))
A)
Cache hit: Data for user1
Cache miss: Data for user1
Cache hit: Data for user2
B)
Cache hit: Data for user1
Cache hit: Data for user1
Cache hit: Data for user2
C)
Cache miss: Data for user1
Cache miss: Data for user1
Cache miss: Data for user2
D)
Cache miss: Data for user1
Cache hit: Data for user1
Cache miss: Data for user2
💡 Hint

Check when the cache is empty and when it has stored data.
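One way to work through the hint is to instrument a similar cache (a sketch with made-up names, not the quiz code itself) so it reports whether the key was already stored before each call:

```python
cache = {}

def lookup(key):
    # Report whether the cache already held this key, then store it,
    # so the next call for the same key reports a hit.
    hit = key in cache
    cache[key] = f"Data for {key}"
    return "hit" if hit else "miss"

print(lookup('user1'))  # cache starts empty → miss
print(lookup('user1'))  # same key again → hit
print(lookup('user2'))  # new key → miss
```

The first request for each distinct key always misses; only repeats of an already-seen key can hit.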

🔧 Debug (advanced)
Identify the error causing cache to not reduce server load

This code tries to cache API responses but does not reduce server load as expected. What is the main problem?

Python
cache = {}
def fetch_data(key):
    if key not in cache:
        data = f"Data for {key}"
    cache[key] = data
    return cache[key]

print(fetch_data('item1'))
print(fetch_data('item1'))
A) Variable 'data' is used before assignment when key is in cache, causing a runtime error.
B) Cache is never updated because the condition is wrong, so it always fetches new data.
C) The function always returns None because it lacks a return statement.
D) The cache dictionary is cleared after each call, so caching does not work.
💡 Hint

Look at what happens when the key is already in the cache.
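For comparison, here is one way a correctly working version could look (a sketch; the function name is mine): the value is computed and stored only inside the miss branch, so nothing is referenced before it exists.

```python
cache = {}

def fetch_data_fixed(key):
    # Compute and store the value only on a cache miss; on a hit the
    # stored value is returned directly, so no unbound name is touched.
    if key not in cache:
        cache[key] = f"Data for {key}"
    return cache[key]

print(fetch_data_fixed('item1'))  # first call populates the cache
print(fetch_data_fixed('item1'))  # second call is served from the cache
```

Contrasting this with the buggy snippet above highlights which line belongs inside the `if` block.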

📝 Syntax (advanced)
Which code snippet correctly implements caching?

Which of the following Python code snippets correctly implements a cache to reduce server load?

A)
cache = {}
def get_response(key):
    if key not in cache:
        response = f"Response for {key}"
    cache[key] = response
    return cache[key]
B)
cache = []
def get_response(key):
    if key in cache:
        return cache[key]
    response = f"Response for {key}"
    cache.append(response)
    return response
C)
cache = {}
def get_response(key):
    if key in cache:
        return cache[key]
    response = f"Response for {key}"
    cache[key] = response
    return response
D)
cache = {}
def get_response(key):
    response = f"Response for {key}"
    cache[key] = response
    if key in cache:
        return cache[key]
💡 Hint

Check which code stores and returns cached data properly.

🚀 Application (expert)
Calculate server load reduction with caching

A server receives 1000 identical requests per minute. Without caching, each request takes 50ms of server processing time. With caching, 80% of requests are served from cache instantly (0ms processing), and 20% require full processing.

What is the total server processing time per minute with caching?

A) 1000 ms
B) 10000 ms
C) 50000 ms
D) 40000 ms
💡 Hint

Calculate time for cached and non-cached requests separately, then add.