LangChain framework · ~20 mins

Question reformulation with history in LangChain - Practice Problems & Coding Challenges

Challenge - 5 Problems
🎖️ LangChain Question Reformulation Master: get all challenges correct to earn this badge!
⚙️ Component Behavior
intermediate
What is the output of this LangChain question reformulation code?
Given the following LangChain code snippet that reformulates a follow-up question using chat history, what will be the output when the input question is "What is the capital?" and the chat history contains one previous question-answer pair?
from langchain.chains import TransformChain
from langchain.prompts import PromptTemplate

prompt_template = """
Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.

Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""
prompt = PromptTemplate(input_variables=["chat_history", "question"], template=prompt_template)

chain = TransformChain(
    input_variables=["chat_history", "question"],
    output_variables=["rephrased_question"],
    transform=lambda inputs: {"rephrased_question": f"Standalone: {inputs['question']}"}
)

inputs = {"chat_history": "Q: Who is the president? A: Joe Biden.", "question": "What is the capital?"}
output = chain(inputs, return_only_outputs=True)
print(output)
A. {"rephrased_question": "What is the capital?"}
B. {"rephrased_question": "Standalone: What is the capital?"}
C. SyntaxError due to a missing colon in the lambda
D. KeyError because the 'question' key is missing from inputs
💡 Hint
Look at how the lambda function formats the output dictionary using the input question.
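The transform callable above never reads the chat history, so its output can be checked in isolation. A minimal dependency-free sketch of the same lambda (this mirrors only the transform logic, not LangChain internals):

```python
# Dependency-free sketch of the transform lambda from the snippet above.
# It reads only inputs["question"] and prefixes it with "Standalone: ";
# the chat history is passed in but never used.
transform = lambda inputs: {"rephrased_question": f"Standalone: {inputs['question']}"}

inputs = {
    "chat_history": "Q: Who is the president? A: Joe Biden.",
    "question": "What is the capital?",
}
result = transform(inputs)
print(result)  # {'rephrased_question': 'Standalone: What is the capital?'}
```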
📝 Syntax
intermediate
Which option causes an error when creating this LangChain prompt template?
Consider the following prompt template string for question reformulation. Which option introduces a mismatch between the template's placeholders and the declared input variables, preventing the PromptTemplate from being created when template validation is enabled?
prompt_template = """
Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.

Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""
A. prompt = PromptTemplate(input_variables=["chat_history", "question"], template=prompt_template)
B. prompt = PromptTemplate(input_variables=["chat_history", "question"], template=prompt_template + "\n")
C. prompt = PromptTemplate(input_variables=["chat_history", "question"], template=prompt_template[:-1])
D. prompt = PromptTemplate(input_variables=["chat_history", "question"], template=prompt_template.replace('{question}', '{follow_up}'))
💡 Hint
Check if the input variables match the placeholders in the template string.
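One way to apply the hint programmatically: extract the named placeholders from the template and compare them against the declared input variables. A standard-library sketch (string.Formatter stands in here for LangChain's own template validation):

```python
from string import Formatter

prompt_template = """
Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.

Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""

def placeholders(template):
    """Collect the named {placeholders} in a format-style template string."""
    return {name for _, name, _, _ in Formatter().parse(template) if name}

print(sorted(placeholders(prompt_template)))  # ['chat_history', 'question']

# Renaming {question} to {follow_up}, as option D does, breaks the match
# with input_variables=["chat_history", "question"]:
renamed = prompt_template.replace("{question}", "{follow_up}")
print(sorted(placeholders(renamed)))  # ['chat_history', 'follow_up']
```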
🔍 State Output
advanced
What is the value of 'rephrased_question' after running this LangChain chain?
Given a LangChain TransformChain that reformulates a question by appending chat history, what is the value of 'rephrased_question' after running with the inputs below?
from langchain.chains import TransformChain

chain = TransformChain(
    input_variables=["chat_history", "question"],
    output_variables=["rephrased_question"],
    transform=lambda inputs: {"rephrased_question": f"{inputs['chat_history']} Then ask: {inputs['question']}"}
)

inputs = {"chat_history": "Q: Who won the game? A: Team A.", "question": "What was the score?"}
output = chain(inputs, return_only_outputs=True)
rephrased = output["rephrased_question"]
A. "Q: Who won the game? A: Team A. Then ask: What was the score?"
B. "Then ask: What was the score? Q: Who won the game? A: Team A."
C. "Q: Who won the game? A: Team A. What was the score?"
D. KeyError because 'rephrased_question' is not in the output
💡 Hint
Look at how the lambda formats the string using chat_history and question keys.
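As with the first problem, the transform can be checked in isolation; this sketch reproduces the lambda without the LangChain wrapper:

```python
# The f-string places chat_history first, then the literal " Then ask: ",
# then the follow-up question.
transform = lambda inputs: {
    "rephrased_question": f"{inputs['chat_history']} Then ask: {inputs['question']}"
}

inputs = {"chat_history": "Q: Who won the game? A: Team A.",
          "question": "What was the score?"}
rephrased = transform(inputs)["rephrased_question"]
print(rephrased)  # Q: Who won the game? A: Team A. Then ask: What was the score?
```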
🔧 Debug
advanced
Which option causes a runtime error when running this LangChain chain?
Given this LangChain TransformChain code, which option will cause a runtime error when calling chain.run with the provided inputs?
from langchain.chains import TransformChain

chain = TransformChain(
    input_variables=["chat_history", "question"],
    output_variables=["rephrased_question"],
    transform=lambda inputs: {"rephrased_question": inputs["question"].upper()}
)

inputs = {"chat_history": "Previous Q&A", "question": "hello"}
A. chain.run(inputs)
B. chain.run({"chat_history": "Previous Q&A", "question": None})
C. chain.run({"chat_history": "Previous Q&A"})
D. chain.run({"question": "hello"})
💡 Hint
Check which inputs dictionary is missing required keys.
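The distinct failure modes can be reproduced without LangChain. Note that the real Chain base class also validates that every declared input variable is present before the transform runs, so a missing key the lambda never reads (option D's absent chat_history) is rejected too; the sketch below exercises only the lambda itself:

```python
transform = lambda inputs: {"rephrased_question": inputs["question"].upper()}

# Option A: all keys present, the call succeeds.
print(transform({"chat_history": "Previous Q&A", "question": "hello"}))
# → {'rephrased_question': 'HELLO'}

# Option C: 'question' missing, so the dict lookup raises KeyError.
try:
    transform({"chat_history": "Previous Q&A"})
except KeyError as exc:
    print("KeyError:", exc)

# Option B: question is None, so .upper() raises AttributeError.
try:
    transform({"chat_history": "Previous Q&A", "question": None})
except AttributeError as exc:
    print("AttributeError:", exc)
```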
🧠 Conceptual
expert
How does LangChain's question reformulation with history improve conversational AI?
Why is it important to reformulate follow-up questions into standalone questions using chat history in LangChain conversational chains?
A. It allows the language model to understand the context without needing the entire conversation history, improving accuracy and relevance.
B. It reduces the number of tokens sent to the model by removing all previous conversation context.
C. It enables the model to generate questions instead of answers, improving engagement.
D. It stores the entire conversation history in a database for later retrieval.
💡 Hint
Think about how standalone questions help the model understand context better.