Prompt Engineering / GenAI · ~20 mins

LLM wrappers in Prompt Engineering / GenAI - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual · intermediate
What is the primary purpose of an LLM wrapper?

Large Language Models (LLMs) are powerful but complex. Why do developers use wrappers around LLMs?

A. To simplify interaction by providing easy-to-use functions and managing inputs and outputs
B. To increase the size of the LLM model for better accuracy
C. To replace the LLM with a smaller model for faster responses
D. To convert text data into images for visual processing
💡 Hint

Think about how wrappers help users work with complex tools more easily.
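As a quick illustration of what "simplifying interaction" means in practice, here is a minimal sketch of a wrapper. All names are hypothetical, and fake_model_api stands in for a real LLM endpoint:

```python
def fake_model_api(payload: dict) -> dict:
    # Stand-in for a real LLM HTTP endpoint (hypothetical response shape).
    return {"choices": [{"text": f"echo: {payload['prompt']}"}]}

class EasyLLM:
    """Hypothetical wrapper: hides prompt formatting and response parsing."""

    def __init__(self, system_prefix: str = "You are a helpful assistant."):
        self.system_prefix = system_prefix

    def ask(self, question: str) -> str:
        # Input management: build the full prompt for the caller.
        payload = {"prompt": f"{self.system_prefix}\n{question}"}
        raw = fake_model_api(payload)
        # Output management: unwrap the raw API response to plain text.
        return raw["choices"][0]["text"]

llm = EasyLLM()
print(llm.ask("What is a wrapper?"))
```

The caller never touches the payload format or the response structure, which is exactly the simplification the correct option describes.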

Predict Output · intermediate
Output of LLM wrapper prompt formatting

Given this Python code using a simple LLM wrapper, what is the printed output?

class SimpleLLMWrapper:
    def __init__(self, model_name):
        self.model_name = model_name

    def generate(self, prompt):
        return f"Model {self.model_name} received: {prompt}"

wrapper = SimpleLLMWrapper('TestModel')
response = wrapper.generate('Hello world')
print(response)
A. Hello world
B. Model TestModel received: Hello world
C. Model received: TestModel Hello world
D. Error: generate method missing return
💡 Hint

Look at how the generate method formats the string.
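You can verify the f-string behaviour by running the question's snippet as-is; the model name and prompt are interpolated into the template in order:

```python
class SimpleLLMWrapper:
    def __init__(self, model_name):
        self.model_name = model_name

    def generate(self, prompt):
        # Both placeholders are filled at call time.
        return f"Model {self.model_name} received: {prompt}"

wrapper = SimpleLLMWrapper('TestModel')
print(wrapper.generate('Hello world'))
# → Model TestModel received: Hello world
```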

Model Choice · advanced
Choosing the best LLM wrapper for caching responses

You want to build an application that calls an LLM many times with repeated prompts. Which wrapper feature is most important to improve speed?

A. A wrapper that disables all logging to save disk space
B. A wrapper that increases the model size for better accuracy
C. A wrapper that converts text prompts into images before sending
D. A wrapper that supports caching previous responses to avoid repeated calls
💡 Hint

Think about how to avoid repeating expensive operations.
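A minimal sketch of the caching idea, assuming a dict-backed in-memory cache and using model_fn as a stand-in for the expensive LLM call:

```python
class CachingWrapper:
    """Sketch: identical prompts are served from a cache instead of
    triggering another (expensive) model call."""

    def __init__(self, model_fn):
        self.model_fn = model_fn
        self.cache = {}
        self.calls = 0  # counts real model invocations, for demonstration

    def generate(self, prompt: str) -> str:
        if prompt not in self.cache:
            self.calls += 1  # only cache misses reach the model
            self.cache[prompt] = self.model_fn(prompt)
        return self.cache[prompt]

wrapper = CachingWrapper(lambda p: f"Response to: {p}")
wrapper.generate("Hello")
wrapper.generate("Hello")   # repeated prompt: served from cache
print(wrapper.calls)        # → 1
```

In real applications the same idea is often implemented with functools.lru_cache or an external store such as Redis; the sketch above just makes the cache-hit behaviour visible.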

Hyperparameter · advanced
Effect of temperature parameter in LLM wrappers

In an LLM wrapper, what does increasing the temperature parameter usually do to the generated text?

A. Makes the output more random and creative
B. Makes the output shorter and more precise
C. Always produces the same output for the same prompt
D. Causes the model to ignore the prompt
💡 Hint

Think about randomness and creativity in text generation.
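Temperature is typically applied by dividing the model's logits before the softmax: higher values flatten the token distribution (more random sampling), lower values sharpen it. A small self-contained sketch, with no real model involved:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Rescale logits by 1/T, then apply a numerically stable softmax.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
print(softmax_with_temperature(logits, 0.5))  # peaked: top token dominates
print(softmax_with_temperature(logits, 2.0))  # flatter: more token diversity
```

With the flatter distribution at high temperature, less likely tokens are sampled more often, which is why the output reads as more random and creative.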

🔧 Debug · expert
Why does this LLM wrapper code raise an AttributeError?

Examine the code below. Why does it raise an AttributeError when calling wrapper.generate(123)?

class LLMWrapper:
    def generate(self, prompt: str) -> str:
        return f"Response to: {prompt.upper()}"

wrapper = LLMWrapper()
output = wrapper.generate(123)
A. Because the class LLMWrapper is not instantiated properly
B. Because the generate method is missing a return statement
C. Because 123 is an integer and integers do not have the method upper()
D. Because the prompt argument must be a list, not a string
💡 Hint

Check the type of the argument and what methods it supports.
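The underlying issue is that Python type hints (prompt: str) are not enforced at runtime, so the integer reaches prompt.upper() and fails. One possible fix, sketched here, is to have the wrapper validate its argument explicitly:

```python
class LLMWrapper:
    def generate(self, prompt: str) -> str:
        # Hints alone won't stop an int from slipping through, so
        # validate explicitly and raise a clear error instead.
        if not isinstance(prompt, str):
            raise TypeError(f"prompt must be str, got {type(prompt).__name__}")
        return f"Response to: {prompt.upper()}"

wrapper = LLMWrapper()
print(wrapper.generate("hello"))   # → Response to: HELLO
```

Another common choice is to coerce with str(prompt) instead of raising; which is better depends on whether silently accepting non-string prompts is acceptable for your application.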