Prompt Engineering / GenAI · ~20 mins

Caching strategies for LLMs in Prompt Engineering / GenAI - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual · intermediate
Understanding Cache Hit in LLMs
In the context of caching strategies for large language models (LLMs), what does a cache hit mean?
A. The requested output is found in the cache, so the model can reuse it without recomputing.
B. The model fails to find any relevant data in the cache and must query an external database.
C. The model generates a new response from scratch without using cached data.
D. The cache is full and cannot store any more outputs.
💡 Hint
Think about what happens when the model can reuse previous results quickly.
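To make the hint concrete, here is a minimal sketch of a hit/miss lookup, assuming an in-memory dict keyed by the prompt string (the `cache` and `lookup` names are illustrative, not a real API):

```python
# Minimal hit/miss check for cached LLM outputs.
cache = {"What is caching?": "Caching stores results for reuse."}

def lookup(prompt):
    if prompt in cache:
        return cache[prompt], True   # cache hit: reuse the stored output
    return None, False               # cache miss: the model must recompute

output, hit = lookup("What is caching?")
print(hit)  # True: the output is reused without recomputation
```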
Model Choice · intermediate
Choosing a Cache Type for LLMs
Which caching strategy is best suited for storing frequently requested LLM outputs to reduce latency?
A. Least Recently Used (LRU) cache, which evicts the least recently accessed outputs when full.
B. Write-back cache, where outputs are stored only in cache and written back later to main storage.
C. Write-through cache, where every output is immediately written to the main storage and cache.
D. Random replacement cache, which evicts random outputs when the cache is full.
💡 Hint
Think about which method keeps the most useful outputs available.
Metrics · advanced
Evaluating Cache Effectiveness
Given an LLM caching system, which metric best measures how often the cache successfully provides outputs without recomputation?
A. Throughput, the number of outputs generated per second.
B. Latency, the time taken to generate outputs from scratch.
C. Cache miss rate, the percentage of requests not found in cache.
D. Cache hit rate, the percentage of requests found in cache.
💡 Hint
This metric shows how often the cache helps avoid extra work.
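For intuition, such a metric can be computed from two counters kept by the cache; the counter names and values below are hypothetical:

```python
# Computing cache effectiveness from hypothetical request counters.
hits, total = 87, 100
hit_rate = hits / total      # fraction of requests served from cache
miss_rate = 1 - hit_rate     # fraction that required recomputation
print(f"hit rate: {hit_rate:.0%}")  # hit rate: 87%
```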
🔧 Debug · advanced
Identifying Cache Invalidation Issue
An LLM caching system returns outdated responses after the underlying model is updated. What is the most likely cause?
A. The model is overfitting to cached outputs.
B. Cache hit rate is too low, causing frequent recomputations.
C. Cache invalidation is not properly implemented, so old outputs remain in cache.
D. The cache size is too large, causing slow lookups.
💡 Hint
Think about what happens when cached data does not reflect model changes.
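One common remedy, sketched under the assumption that cache keys can embed a model version (the key format and names here are illustrative), is to make entries from older model versions unreachable after an update:

```python
# Versioned cache keys: entries written under an old model version
# never match lookups made under the new version.
MODEL_VERSION = "v2"
cache = {("v1", "hello"): "old output"}  # entry from the previous model

def get(prompt):
    return cache.get((MODEL_VERSION, prompt))  # "v1" entries never match

print(get("hello"))  # None: the stale v1 output is effectively invalidated
```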
Predict Output · expert
Output of LRU Cache Simulation Code
What is the output of this Python code simulating an LRU cache for LLM outputs?
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.cache = OrderedDict()
        self.capacity = capacity

    def get(self, key):
        if key not in self.cache:
            return -1
        # Mark the key as most recently used before returning it.
        self.cache.move_to_end(key)
        return self.cache[key]

    def put(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        # Evict the least recently used entry when over capacity.
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)

cache = LRUCache(2)
cache.put('a', 'output1')
cache.put('b', 'output2')
print(cache.get('a'))
cache.put('c', 'output3')
print(cache.get('b'))
print(cache.get('c'))
A.
-1
output2
output3

B.
output1
-1
output3

C.
output1
output2
output3

D.
-1
-1
output3
💡 Hint
Remember that the cache capacity is 2 and least recently used items get removed.