Given this code snippet, what will be the output after two follow-up questions?

Predict Output · Q4 of 15 (medium)
LangChain - Conversational RAG
```python
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()

class MockLLM:
    def invoke(self, prompt):
        last_human = prompt.split('Human:')[-1].split('\n')[0].strip()
        return f"Answer to: {last_human}"

llm = MockLLM()
chain = ConversationChain(llm=llm, memory=memory)

print(chain.run("What is AI?"))
print(chain.run("How does it learn?"))
```
A. Answer to: What is AI?
   Answer to: How does it learn?
B. Answer to: What is AI?
   Answer to: What is AI? How does it learn?
C. Answer to: What is AI? How does it learn?
   Answer to: How does it learn?
D. Error: memory not supported with lambda llm
Step-by-Step Solution
  1. Step 1: Understand the mock LLM's behavior

    The mock LLM splits the formatted prompt on 'Human:' and keeps only the text after the last occurrence, up to the next newline: that is always the current question.
  2. Step 2: Check how memory is used

    ConversationBufferMemory stores the conversation history, and the chain prepends it to each prompt. The mock LLM receives that history but discards it by parsing out only the current question, so each output echoes just the latest input.
  3. Final Answer:

    Answer to: What is AI?
    Answer to: How does it learn?
    -> Option A
  4. Quick Check:

    The mock LLM ignores the buffered history and echoes only the current question [OK]
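The two steps above can be verified without installing LangChain. The sketch below is a minimal stand-in for the real chain (the `SimulatedConversationChain` class and its `Human:`/`AI:` prompt layout are assumptions mirroring ConversationBufferMemory's default formatting, not LangChain's actual implementation):

```python
class MockLLM:
    def invoke(self, prompt):
        # Take the text after the LAST "Human:" marker, up to the newline:
        # this discards all earlier history present in the prompt.
        last_human = prompt.split('Human:')[-1].split('\n')[0].strip()
        return f"Answer to: {last_human}"

class SimulatedConversationChain:
    """Hypothetical stand-in for ConversationChain + ConversationBufferMemory."""
    def __init__(self, llm):
        self.llm = llm
        self.history = ""  # accumulated "Human: ... / AI: ..." turns

    def run(self, question):
        # The real chain injects the buffered history before the new turn.
        prompt = f"{self.history}Human: {question}\nAI:"
        answer = self.llm.invoke(prompt)
        self.history += f"Human: {question}\nAI: {answer}\n"
        return answer

chain = SimulatedConversationChain(MockLLM())
print(chain.run("What is AI?"))        # Answer to: What is AI?
print(chain.run("How does it learn?")) # Answer to: How does it learn?
```

Note that after the second call, `chain.history` contains both turns; the history is present in the prompt but never surfaces in the output, which is exactly why Option A is correct.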
Common Mistakes:
  • Assuming memory content is included in output
  • Expecting concatenated questions in output
  • Thinking mock LLM causes error with memory
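To see why the first two mistakes are tempting: on the second call the prompt really does contain both questions. The snippet below hand-writes that second-turn prompt (the exact `Human:`/`AI:` labels are an assumption matching ConversationBufferMemory's defaults) and applies the mock's parsing to show why only the latest question survives:

```python
# Prompt the mock LLM receives on the SECOND call, with the buffered
# history prepended by the memory (assumed default Human:/AI: labels):
prompt_turn2 = (
    "Human: What is AI?\n"
    "AI: Answer to: What is AI?\n"
    "Human: How does it learn?\n"
    "AI:"
)

# The mock keeps only the text after the LAST "Human:" marker:
last_human = prompt_turn2.split('Human:')[-1].split('\n')[0].strip()
print(last_human)  # How does it learn?
```

The history is in the prompt, but the parsing throws it away, so the questions are never concatenated in the output and no error is raised.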