LangChain uses memory components to keep track of previous conversation turns. This memory is passed along with the new question to the language model, allowing it to understand the context and answer follow-up questions properly.
from langchain.memory import ConversationBufferMemory

# The default memory key is "history"; pass memory_key="chat_history"
# explicitly so the buffer can be read back under that key.
memory = ConversationBufferMemory(memory_key="chat_history")
memory.save_context({"input": "What is AI?"},
                    {"output": "AI is artificial intelligence."})
memory.save_context({"input": "Who created AI?"},
                    {"output": "AI was created by many researchers."})
print(memory.load_memory_variables({})["chat_history"])
ConversationBufferMemory stores the conversation as a single string buffer, labeling each turn with a prefix for human input and AI output ("Human:" and "AI:" by default). All previous turns are kept, in order, with no summarization or truncation.
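To make the buffer format concrete, here is a minimal self-contained sketch (no LangChain dependency) of how such a memory might accumulate turns into a labeled string; `BufferMemorySketch` is a hypothetical stand-in, not the real class, though the real ConversationBufferMemory produces a similarly labeled buffer.

```python
# Minimal sketch of a string-buffer memory; BufferMemorySketch is
# a hypothetical illustration, not the real LangChain class.
class BufferMemorySketch:
    def __init__(self, human_prefix="Human", ai_prefix="AI"):
        self.human_prefix = human_prefix
        self.ai_prefix = ai_prefix
        self.turns = []  # ordered (input, output) pairs

    def save_context(self, inputs, outputs):
        # Append one conversation turn, keeping chronological order.
        self.turns.append((inputs["input"], outputs["output"]))

    def buffer(self):
        # Render every stored turn as labeled lines, oldest first.
        lines = []
        for user_text, ai_text in self.turns:
            lines.append(f"{self.human_prefix}: {user_text}")
            lines.append(f"{self.ai_prefix}: {ai_text}")
        return "\n".join(lines)

memory = BufferMemorySketch()
memory.save_context({"input": "What is AI?"},
                    {"output": "AI is artificial intelligence."})
print(memory.buffer())
# Human: What is AI?
# AI: AI is artificial intelligence.
```

Because the whole labeled string is prepended to every new question, the model sees the full conversation each time, which is simple but grows the prompt linearly with the number of turns.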
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

# Assumes `llm` is an already-initialized language model instance.
memory = ConversationBufferMemory()
chain = ConversationChain(llm=llm, memory=memory)
response = chain.run("What is LangChain?")
Option A is missing a comma between the llm and memory arguments, causing a syntax error.
If the chain is constructed with a different memory instance, or the memory is not passed at all, earlier turns are never included in the prompt, so the chain loses the conversation context and follow-up questions cannot be answered correctly.
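The point about instance identity can be illustrated without LangChain. The sketch below uses hypothetical helpers (`FakeMemory`, `build_prompt`) to show that only the memory instance the chain actually holds contributes history to the prompt; a fresh instance has an empty buffer.

```python
# Sketch illustrating why the chain must hold the SAME memory instance:
# a fresh memory object has an empty buffer, so earlier turns vanish.
# FakeMemory and build_prompt are hypothetical illustrations.
class FakeMemory:
    def __init__(self):
        self.buffer = []  # ordered (question, answer) pairs

    def save_context(self, inputs, outputs):
        self.buffer.append((inputs["input"], outputs["output"]))

def build_prompt(memory, question):
    # A chain prepends the memory buffer to every new question.
    history = "\n".join(f"Human: {q}\nAI: {a}" for q, a in memory.buffer)
    return f"{history}\nHuman: {question}" if history else f"Human: {question}"

shared = FakeMemory()
shared.save_context({"input": "What is AI?"},
                    {"output": "AI is artificial intelligence."})

# Same instance: the follow-up prompt carries the earlier turn.
print("What is AI?" in build_prompt(shared, "Who created it?"))  # True

# Fresh instance: its buffer is empty, so the context is lost.
fresh = FakeMemory()
print("What is AI?" in build_prompt(fresh, "Who created it?"))   # False
```

This is why the same `memory` object must be both written to after each turn and passed to the chain; two separate instances do not share state.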
Memory components store past conversation turns so the language model can generate answers that consider what was said before, making the conversation flow naturally.