Practice - 5 Tasks
Answer the questions below
1. Fill in the blank (easy)
Complete the code to add the retrieved context to the prompt before sending it to the LLM.
Prompt Engineering / GenAI
prompt = "Answer the question based on the context: " + [1] + "\nQuestion: " + question
Common Mistakes
Adding the question twice instead of the context.
Using the LLM response before generating it.
The retrieved context should be added to the prompt so the LLM can use it to answer the question.
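A minimal sketch of the filled-in line, with blank [1] replaced by the retrieved context. The sample `context` and `question` strings are illustrative assumptions, not part of the exercise.

```python
# Assumed sample values; in a real RAG pipeline, `context` comes from retrieval.
context = "Paris is the capital of France."
question = "What is the capital of France?"

# Blank [1] is the retrieved context, placed before the question so the
# LLM can ground its answer in it.
prompt = "Answer the question based on the context: " + context + "\nQuestion: " + question
print(prompt)
```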
2. Fill in the blank (medium)
Complete the code to combine the retrieved context and the user question into a single input for the LLM.
Prompt Engineering / GenAI
llm_input = f"Context: [1]\nQuestion: {question}"
Common Mistakes
Using the user input instead of the retrieved context.
Forgetting to include the context in the input string.
The LLM input should include the retrieved context so it can generate an informed answer.
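The same idea with an f-string, blank [1] filled with the retrieved context. Again, the sample strings are assumptions for illustration.

```python
# Assumed sample values standing in for retrieval output and user input.
context = "RAG augments LLM prompts with retrieved documents."
question = "What does RAG do?"

# Blank [1] is the retrieved context, interpolated alongside the question.
llm_input = f"Context: {context}\nQuestion: {question}"
print(llm_input)
```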
3. Fill in the blank (hard)
Fix the error in the code that sends the combined context and question to the LLM for completion.
Prompt Engineering / GenAI
response = llm.complete(prompt=[1])
Common Mistakes
Passing only the question or only the context to the LLM.
Using a variable that does not contain the full prompt.
The LLM expects the combined input (context and question) as the prompt, which is stored in llm_input.
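A runnable sketch of the corrected call. `StubLLM` is a hypothetical stand-in for a real LLM client, since the exercise does not name one; the key point is that blank [1] receives `llm_input`, the full combined prompt.

```python
class StubLLM:
    """Hypothetical stand-in for a real LLM client exposing .complete()."""
    def complete(self, prompt):
        # A real client would return a model completion; the stub echoes.
        return "(completion for) " + prompt

llm = StubLLM()
llm_input = "Context: Paris is the capital of France.\nQuestion: What is the capital of France?"

# Blank [1] must be the full combined prompt (llm_input),
# not just the question or just the context.
response = llm.complete(prompt=llm_input)
print(response)
```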
4. Fill in the blank (hard)
Fill both blanks to create a dictionary that maps each document's ID to its text and keeps only documents whose text length is greater than 100.
Prompt Engineering / GenAI
filtered_docs = {doc.id: doc.[1] for doc in documents if len(doc.[2]) > 100}
Common Mistakes
Using different attributes for the dictionary value and length check.
Using metadata or summary which may not contain full text.
The document's text attribute holds the content, and filtering is done by checking the length of the text.
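A self-contained sketch with both blanks filled by the same `text` attribute. The `Document` dataclass and sample documents are assumptions made so the comprehension can run; the exercise only specifies `doc.id` and the blanks.

```python
from dataclasses import dataclass

@dataclass
class Document:
    """Assumed minimal document shape: an ID plus its full text."""
    id: str
    text: str

documents = [
    Document(id="a", text="short"),
    Document(id="b", text="x" * 150),
]

# Both blanks use the same attribute: .text holds the content used as the
# dictionary value, and the filter checks len(doc.text) > 100.
filtered_docs = {doc.id: doc.text for doc in documents if len(doc.text) > 100}
print(filtered_docs)
```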
5. Fill in the blank (hard)
Fill all three blanks to create a prompt dictionary with keys 'context', 'question', and 'max_tokens' for the LLM call.
Prompt Engineering / GenAI
prompt_data = {"context": [1], "question": [2], "max_tokens": [3]}
Common Mistakes
Using the LLM response instead of the question or context.
Setting max_tokens to a string instead of a number.
The prompt dictionary should include the retrieved context, the user's question, and a token limit for the LLM.
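A sketch of the completed dictionary. The sample strings and the limit of 256 tokens are illustrative assumptions; the exercise only fixes the three keys and that `max_tokens` must be a number.

```python
# Assumed sample values for the retrieved context and user question.
context = "Paris is the capital of France."
question = "What is the capital of France?"

# Blanks [1], [2], [3]: the retrieved context, the user's question,
# and a numeric token limit (not a string).
prompt_data = {"context": context, "question": question, "max_tokens": 256}
print(prompt_data)
```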