LangChain framework · ~10 mins

Handling follow-up questions in LangChain - Step-by-Step Execution

Concept Flow - Handling follow-up questions
User asks initial question
LangChain processes question
Generate answer
User asks follow-up question
LangChain uses context + follow-up
Generate follow-up answer
Repeat or end
This flow shows how LangChain handles an initial question, then uses context to answer follow-up questions.
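The flow above can be sketched as a minimal Python loop. The `answer` function here is a hypothetical stand-in for the real LLM call; it only reports how much context it received:

```python
# Minimal sketch of the concept flow. answer() is a stand-in for the
# real LLM call; it just reports how much context it was given.
def answer(question, history):
    # A real chain would pass `history` to the model as context.
    return f"Answer to {question!r} (context: {len(history)} prior turns)"

history = []                            # conversation context, empty at the start
for question in ["Hello!", "Can you tell me more?"]:
    reply = answer(question, history)   # follow-ups see the earlier turns
    history.append(question)            # each input updates the context
    print(reply)
```

The second iteration receives one prior turn, which is exactly what lets it treat "Can you tell me more?" as a follow-up rather than a fresh question.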
Execution Sample
LangChain
from langchain.chains import ConversationChain
from langchain.llms import OpenAI

# ConversationChain buffers prior turns by default, so follow-up
# questions are answered with the earlier exchange as context.
conversation = ConversationChain(llm=OpenAI())

print(conversation.predict(input="Hello!"))
print(conversation.predict(input="Can you tell me more?"))
This code creates a conversation chain that remembers context to answer follow-up questions.
Execution Table
| Step | Input | Context Before | Action | Context After | Output |
|------|-------|----------------|--------|---------------|--------|
| 1 | "Hello!" | {} | Process input, generate answer | {"history": ["Hello!"]} | "Hi! How can I help you today?" |
| 2 | "Can you tell me more?" | {"history": ["Hello!"]} | Use history + input to generate follow-up answer | {"history": ["Hello!", "Can you tell me more?"]} | "Sure! What would you like to know more about?" |
| 3 | "Thanks!" | {"history": ["Hello!", "Can you tell me more?"]} | Add input to context, generate closing answer | {"history": ["Hello!", "Can you tell me more?", "Thanks!"]} | "You're welcome! Let me know if you have more questions." |
| 4 | "Bye" | {"history": ["Hello!", "Can you tell me more?", "Thanks!"]} | Add input, generate farewell | {"history": ["Hello!", "Can you tell me more?", "Thanks!", "Bye"]} | "Goodbye! Have a great day!" |
| 5 | N/A | {"history": ["Hello!", "Can you tell me more?", "Thanks!", "Bye"]} | No more input, end conversation | {"history": ["Hello!", "Can you tell me more?", "Thanks!", "Bye"]} | Conversation ended |
💡 Conversation ends when user stops sending input or says goodbye.
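The execution table can be reproduced as a small pure-Python simulation; the replies are omitted and only the context bookkeeping is modeled:

```python
# Simulates the execution table: each input is appended to the history,
# so "Context Before" and "Context After" can be observed at every step.
inputs = ["Hello!", "Can you tell me more?", "Thanks!", "Bye"]
history = []                       # {"history": []} before step 1

for step, user_input in enumerate(inputs, start=1):
    context_before = list(history)
    history.append(user_input)     # action: add input to context
    print(f"Step {step}: before={context_before} after={history}")

# After step 4 the history holds all four inputs; with no further
# input the conversation ends (step 5 in the table).
```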
Variable Tracker
| Variable | Start | After Step 1 | After Step 2 | After Step 3 | After Step 4 | Final |
|----------|-------|--------------|--------------|--------------|--------------|-------|
| history | [] | ["Hello!"] | ["Hello!", "Can you tell me more?"] | ["Hello!", "Can you tell me more?", "Thanks!"] | ["Hello!", "Can you tell me more?", "Thanks!", "Bye"] | ["Hello!", "Can you tell me more?", "Thanks!", "Bye"] |
Key Moments - 3 Insights
How does LangChain remember previous questions?
LangChain stores previous inputs in the 'history' context (see execution_table steps 1-4), so it can use them to understand follow-up questions.
What happens if the user asks a follow-up question without context?
LangChain uses the stored 'history' to provide context. Without history, it treats the input as a new question (see step 1).
Why does the context grow after each input?
Each user input is added to 'history' to keep track of the conversation flow, enabling better follow-up answers (see variable_tracker).
Visual Quiz - 3 Questions
Test your understanding
Look at the execution_table at step 2. What is the context before processing the input?
A. {}
B. {"history": ["Can you tell me more?"]}
C. {"history": ["Hello!"]}
D. {"history": ["Hello!", "Can you tell me more?"]}
💡 Hint
Check the 'Context Before' column at step 2 in the execution_table.
At which step does the conversation end according to the execution_table?
A. Step 4
B. Step 5
C. Step 3
D. Step 2
💡 Hint
Look for the step where the output says 'Conversation ended'.
If the user sends a new question after 'Bye', how does the 'history' variable change?
A. It adds the new question to the existing history
B. It resets to empty []
C. It removes 'Bye' from history
D. It duplicates the last input
💡 Hint
Refer to the variable_tracker showing how 'history' grows after each input.
Concept Snapshot
Handling follow-up questions in LangChain:
- Use ConversationChain to keep context.
- Each input updates conversation history.
- Follow-up questions use previous context.
- Context stored in 'history' variable.
- Conversation ends when no input or user says goodbye.
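One way stored history can feed a follow-up is by folding prior turns into the prompt sent to the model. This is a simplified sketch with a made-up template, not LangChain's actual prompt format:

```python
# Builds a prompt for a follow-up question from the stored history.
# The "Human:"/"AI:" template is illustrative, not LangChain's own.
def build_prompt(history, follow_up):
    context = "\n".join(f"Human: {turn}" for turn in history)
    return f"{context}\nHuman: {follow_up}\nAI:"

history = ["Hello!", "Can you tell me more?"]
prompt = build_prompt(history, "Thanks!")
print(prompt)
```

Because every earlier turn appears in the prompt, the model can resolve references like "more" or "that" in the follow-up against what was said before.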
Full Transcript
This visual execution shows how LangChain handles follow-up questions by keeping a conversation history. When a user asks an initial question, LangChain processes it and stores it in the history. For follow-up questions, LangChain uses this stored history to understand context and generate relevant answers. Each user input updates the history, allowing the conversation to flow naturally. The conversation ends when the user stops sending inputs or says goodbye. This approach helps LangChain remember what was said before and respond appropriately to follow-ups.