LangChain framework · ~5 mins

Why conversation history improves RAG in LangChain

Introduction

Conversation history helps RAG (Retrieval-Augmented Generation) by carrying context from earlier messages into each new query. This makes both retrieval and the final answers clearer and more relevant. It is useful in situations such as:

When building chatbots that remember past user questions
When answering follow-up questions in a conversation
When you want the AI to keep track of a story or topic over time
When users expect personalized or continuous dialogue
When retrieving documents related to earlier parts of a chat
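The follow-up-question case above can be sketched in plain Python (no LangChain required): a question like "When was he born?" is ambiguous on its own, but becomes a useful retrieval query once earlier turns are prepended.

```python
# Minimal sketch: prepend prior turns so a follow-up question
# carries the context it needs for retrieval.
conversation_history = [
    {'role': 'user', 'content': 'Who is Albert Einstein?'},
    {'role': 'assistant', 'content': 'Albert Einstein was a physicist known for relativity.'},
]

follow_up = 'When was he born?'

# Join the prior turns and the new question into one retrieval query.
query_with_context = "\n".join(
    f"{msg['role']}: {msg['content']}" for msg in conversation_history
) + f"\nuser: {follow_up}"

print(query_with_context)
# The query now contains "Albert Einstein", so a retriever can match
# biography documents even though the follow-up never names him.
```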
Syntax
LangChain
conversation_history = []

# Add user input and AI response to history
conversation_history.append({'role': 'user', 'content': user_input})
conversation_history.append({'role': 'assistant', 'content': ai_response})

# Use conversation_history as context for retrieval
context = "\n".join([f"{msg['role']}: {msg['content']}" for msg in conversation_history])
retrieved_docs = retriever.get_relevant_documents(context)

# Generate answer using retrieved docs and conversation history
# (generator is a placeholder for your own answer-generation step,
# not a built-in LangChain object)
answer = generator.generate_answer(retrieved_docs, conversation_history)

Keep conversation history as a list of messages with roles (user, assistant).

Pass the full history to the retriever and generator to improve context understanding.

Examples
Store simple user and assistant messages in conversation history.
LangChain
conversation_history = []
conversation_history.append({'role': 'user', 'content': 'What is AI?'})
conversation_history.append({'role': 'assistant', 'content': 'AI means artificial intelligence.'})
Use conversation history to find documents and generate a better answer.
LangChain
context = "\n".join([f"{msg['role']}: {msg['content']}" for msg in conversation_history])
retrieved_docs = retriever.get_relevant_documents(context)
answer = generator.generate_answer(retrieved_docs, conversation_history)
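The generation step in the example above can be sketched in plain Python by assembling the retrieved documents and the conversation history into a single prompt for the LLM. Here `build_prompt` is an illustrative helper, not a LangChain API.

```python
# Illustrative sketch: combine retrieved documents and conversation
# history into one prompt string. build_prompt is a hypothetical helper.
def build_prompt(retrieved_docs, conversation_history, question):
    doc_text = "\n\n".join(retrieved_docs)
    chat_text = "\n".join(f"{m['role']}: {m['content']}" for m in conversation_history)
    return (
        "Use the documents and the conversation so far to answer.\n\n"
        f"Documents:\n{doc_text}\n\n"
        f"Conversation:\n{chat_text}\n\n"
        f"user: {question}\nassistant:"
    )

docs = ["AI stands for artificial intelligence."]
history = [{'role': 'user', 'content': 'What is AI?'}]
prompt = build_prompt(docs, history, 'Can you give an example?')
print(prompt)
```

The resulting string would be passed to the LLM in place of the bare question, so the model sees both the supporting documents and the dialogue so far.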
Sample Program

This example shows how conversation history is used to retrieve documents and generate an answer with LangChain.

LangChain
# Classic LangChain import layout; in newer releases the vector store,
# embeddings, and LLM classes live in langchain_community / langchain_openai.
from langchain.chains import RetrievalQA
from langchain.vectorstores import FAISS
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.llms import OpenAI

# Sample conversation history
conversation_history = [
    {'role': 'user', 'content': 'Who is Albert Einstein?'},
    {'role': 'assistant', 'content': 'Albert Einstein was a physicist known for relativity.'}
]

# Dummy retriever and generator setup (replace with real ones)
embeddings = OpenAIEmbeddings()
vectorstore = FAISS.load_local('faiss_index', embeddings)
retriever = vectorstore.as_retriever()
llm = OpenAI(temperature=0)

# Use conversation history as context
full_query = "\n".join([f"{msg['role']}: {msg['content']}" for msg in conversation_history])
retrieved_docs = retriever.get_relevant_documents(full_query)

# Generate answer (RetrievalQA retrieves internally from the same retriever,
# so the explicit call above only illustrates the retrieval step)
qa_chain = RetrievalQA.from_chain_type(llm=llm, retriever=retriever)
answer = qa_chain.run(full_query)

print('Answer:', answer)
Important Notes

Keeping conversation history helps the AI understand what was said before.

Without history, answers may be vague or miss context.

Manage history size (for example, keep only the most recent turns) so long conversations don't slow down retrieval or exceed the model's context window.
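One simple way to cap history size is a sliding window over the most recent messages. A minimal sketch in plain Python (`trim_history` and `MAX_TURNS` are illustrative names, not LangChain APIs):

```python
# Keep only the most recent turns so the retrieval query and prompt
# stay small. MAX_TURNS is an assumed limit; tune it for your model.
MAX_TURNS = 6

def trim_history(history, max_turns=MAX_TURNS):
    """Return only the most recent messages from the conversation."""
    return history[-max_turns:]

conversation_history = [
    {'role': 'user', 'content': f'question {i}'} for i in range(10)
]

trimmed = trim_history(conversation_history)
print(len(trimmed))  # 6 — only the newest turns remain
```

LangChain also ships memory utilities that implement this kind of windowing, but the idea is the same: drop the oldest turns first.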

Summary

Conversation history provides important context for RAG systems.

It helps retrieve more relevant documents and generate better answers.

Use a structured list of messages to keep track of the dialogue.