Prompt Engineering / GenAI · ~20 mins

Combining retrieved context with an LLM in Prompt Engineering / GenAI - Practice Problems & Coding Challenges

Challenge - 5 Problems
🎖️ Retrieval-LLM Master: get all five challenges correct to earn this badge!
🧠 Conceptual · intermediate · time limit 2:00
How does retrieved context improve LLM responses?

Imagine you ask a large language model (LLM) a question. How does adding retrieved context from a database help the LLM give better answers?

A. It reduces the LLM's vocabulary size to speed up response time.
B. It provides extra relevant information so the LLM can generate more accurate and specific responses.
C. It replaces the LLM's internal knowledge completely with the retrieved data.
D. It forces the LLM to ignore the question and only use the retrieved context.
💡 Hint

Think about how having more facts related to your question helps you answer better.
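As a minimal sketch of the idea behind this question: retrieved facts are prepended to the user's question so the model can ground its answer in them. The `retrieve` function below is a hypothetical stand-in for a real vector-store lookup, not part of any specific library.

```python
# Sketch of retrieval-augmented prompting.
# `retrieve` is a toy stand-in: a real system would embed the question
# and search a document index for the most similar passages.
def retrieve(question: str) -> list[str]:
    knowledge_base = {
        "eiffel": "The Eiffel Tower is in Paris.",
        "louvre": "The Louvre is the world's largest art museum.",
    }
    return [fact for key, fact in knowledge_base.items() if key in question.lower()]

def build_prompt(question: str) -> str:
    # Extra relevant facts ground the model's answer in specifics
    # it may not reliably recall from its training data alone.
    context = "\n".join(retrieve(question))
    return f"Context: {context}\nQuestion: {question}"

print(build_prompt("Where is the Eiffel Tower?"))
```

The key point: the retrieved context supplements the model's internal knowledge rather than replacing it.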

Predict Output · intermediate · time limit 1:30
Output of combining retrieved context with LLM prompt

Given the following Python code snippet that combines retrieved context with a prompt for an LLM, what is the printed output?

retrieved_context = "The Eiffel Tower is in Paris."
prompt = "Where is the Eiffel Tower located?"
combined_input = f"Context: {retrieved_context}\nQuestion: {prompt}"
print(combined_input)
A.
Context: The Eiffel Tower is in Paris.
Question: Where is the Eiffel Tower located?

B. Context: The Eiffel Tower is in Paris. Question: Where is the Eiffel Tower located?

C.
Context: retrieved_context
Question: prompt

D. The Eiffel Tower is in Paris. Where is the Eiffel Tower located?
💡 Hint

Look carefully at how the f-string substitutes the variables and where the \n newline falls.

Model Choice · advanced · time limit 2:30
Best model architecture for combining retrieved context with LLM

Which model architecture is best suited to effectively combine retrieved context with a large language model for question answering?

A. A recurrent neural network that processes only the retrieved context without the question.
B. A simple feedforward neural network that ignores context and only processes the question.
C. A convolutional neural network trained on images unrelated to text.
D. A retrieval-augmented transformer that encodes context and question jointly before decoding.
💡 Hint

Think about architectures designed to handle both context and question together.

Hyperparameter · advanced · time limit 2:00
Choosing the number of retrieved documents for context

When combining retrieved context with an LLM, what is a key consideration when choosing how many documents to retrieve?

A. Retrieving documents unrelated to the question improves diversity and accuracy.
B. Retrieving only one document always guarantees the best answer.
C. Retrieving too many documents can overwhelm the model and reduce answer quality due to noise.
D. Retrieving zero documents is best because the LLM knows everything already.
💡 Hint

More context is not always better; think about information overload.
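A small sketch of the trade-off this question is about: keeping only the top-k scored documents, under a character budget, rather than stuffing every match into the prompt. The scores and budget here are illustrative assumptions, not output of a real retriever.

```python
def top_k_context(scored_docs: list[tuple[float, str]], k: int,
                  max_chars: int = 500) -> list[str]:
    """Keep up to k highest-scoring documents that fit in the budget."""
    picked, used = [], 0
    for score, doc in sorted(scored_docs, reverse=True):  # best score first
        if len(picked) == k:
            break
        if used + len(doc) > max_chars:
            continue  # skip docs that would overflow the context window
        picked.append(doc)
        used += len(doc)
    return picked

docs = [
    (0.9, "The Eiffel Tower is in Paris."),
    (0.7, "It was completed in 1889."),
    (0.1, "Unrelated trivia about cheese."),  # low-scoring noise
]
print(top_k_context(docs, k=2))
```

With a small k, low-scoring noise is dropped; with a very large k (or budget), irrelevant passages crowd into the prompt and can degrade the answer.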

Metrics · expert · time limit 3:00
Evaluating combined retrieval and LLM system performance

You have a system that retrieves documents and then uses an LLM to answer questions. Which metric best measures how well the combined system answers questions accurately?

A. Exact Match (EM) score comparing generated answers to ground truth answers.
B. Mean Squared Error (MSE) between retrieved document vectors.
C. BLEU score comparing retrieved documents to questions.
D. Training loss of the LLM on unrelated text data.
💡 Hint

Think about metrics that compare generated answers to correct answers.
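Exact Match is straightforward to compute. A common variant (SQuAD-style; one convention among several, not the only one) normalizes case, punctuation, articles, and whitespace before comparing:

```python
import re
import string

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, drop articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, ground_truth: str) -> int:
    """1 if the normalized answers are identical, else 0."""
    return int(normalize(prediction) == normalize(ground_truth))

print(exact_match("The Eiffel Tower is in Paris.", "eiffel tower is in paris"))
print(exact_match("London", "Paris"))
```

Averaging this 0/1 score over a question set measures how often the combined retrieval-plus-LLM pipeline produces exactly the correct answer.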