Prompt Engineering / GenAI (~10 mins)

Combining retrieved context with an LLM in Prompt Engineering / GenAI - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1: fill in blank (easy)

Complete the code to add the retrieved context to the prompt before sending it to the LLM.

prompt = "Answer the question based on the context: " + [1] + "\nQuestion: " + question
Options:
A. user_input
B. question
C. llm_response
D. retrieved_context
Common Mistakes
Adding the question twice instead of the context.
Using the LLM response before generating it.
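A minimal sketch of the completed line with the correct answer (`retrieved_context`) filled in; the variable values are placeholders assumed for illustration:

```python
# Placeholder values for the variables the task assumes.
retrieved_context = "The Louvre is located in Paris, France."
question = "Where is the Louvre located?"

# The retrieved context fills the blank, not the question (which would
# duplicate it) and not an LLM response (which does not exist yet).
prompt = (
    "Answer the question based on the context: "
    + retrieved_context
    + "\nQuestion: "
    + question
)
```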
Task 2: fill in blank (medium)

Complete the code to combine the retrieved context and the user question into a single input for the LLM.

llm_input = f"Context: [1]\nQuestion: {question}"
Options:
A. user_input
B. llm_output
C. retrieved_context
D. system_message
Common Mistakes
Using the user input instead of the retrieved context.
Forgetting to include the context in the input string.
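The completed f-string with the answer (`retrieved_context`) in place; values below are placeholders assumed for the sketch:

```python
retrieved_context = "The Eiffel Tower is about 330 metres tall."
question = "How tall is the Eiffel Tower?"

# The f-string interpolates the retrieved context, not the raw user input,
# so the model sees both the evidence and the question in one string.
llm_input = f"Context: {retrieved_context}\nQuestion: {question}"
```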
Task 3: fill in blank (hard)

Fix the error in the code that sends the combined context and question to the LLM for completion.

response = llm.complete(prompt=[1])
Options:
A. llm_input
B. question
C. retrieved_context
D. user_input
Common Mistakes
Passing only the question or only the context to the LLM.
Using a variable that does not contain the full prompt.
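A hedged sketch of the corrected call. `llm.complete` here is a stand-in for whatever client your stack provides (the real signature will vary by library); the stub only illustrates that the full combined prompt (`llm_input`) is what gets passed, not the question or the context alone:

```python
class StubLLM:
    """Stand-in for a real LLM client; assumed interface for this sketch."""

    def complete(self, prompt):
        # A real client would call a model here; the stub echoes the input size.
        return f"(model received {len(prompt)} characters)"

retrieved_context = "Mount Everest is 8,849 m high."
question = "How high is Mount Everest?"

# llm_input holds the full prompt: context plus question.
llm_input = f"Context: {retrieved_context}\nQuestion: {question}"

llm = StubLLM()
response = llm.complete(prompt=llm_input)
```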
Task 4: fill in blank (hard)

Fill both blanks to create a dictionary that maps each document's id to its text, keeping only documents whose text is longer than 100 characters.

filtered_docs = {doc.id: doc.[1] for doc in documents if len(doc.[2]) > 100}
Options:
A. text
B. content
C. metadata
D. summary
Common Mistakes
Using different attributes for the dictionary value and length check.
Using metadata or summary which may not contain full text.
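The completed comprehension with `text` in both blanks. The `Doc` class below is an assumed schema for illustration; real document objects differ by library, but the point carries over: use the same attribute for the value and for the length check.

```python
from dataclasses import dataclass

@dataclass
class Doc:
    id: str
    text: str

documents = [
    Doc(id="a", text="x" * 150),     # long enough to pass the filter
    Doc(id="b", text="too short"),   # dropped by the > 100 condition
]

# Same attribute (`text`) for both the dictionary value and the length check.
filtered_docs = {doc.id: doc.text for doc in documents if len(doc.text) > 100}
```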
Task 5: fill in blank (hard)

Fill all three blanks to create a prompt dictionary with keys 'context', 'question', and 'max_tokens' for the LLM call.

prompt_data = {"context": [1], "question": [2], "max_tokens": [3]}
Options:
A. retrieved_context
B. user_question
C. 150
D. llm_response
Common Mistakes
Using the LLM response instead of the question or context.
Setting max_tokens to a string instead of a number.
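The completed dictionary with all three blanks filled (`retrieved_context`, `user_question`, `150`); the variable values are placeholders assumed for the sketch:

```python
retrieved_context = "Venus is the second planet from the Sun."
user_question = "Which planet is second from the Sun?"

# context and question come from retrieval and the user, never from the
# LLM's own response; max_tokens is an integer, not the string "150".
prompt_data = {
    "context": retrieved_context,
    "question": user_question,
    "max_tokens": 150,
}
```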