Langchain How-To · Beginner · 3 min read

How to Use Conversational Retrieval Chain in Langchain

Use the ConversationalRetrievalChain class in Langchain by providing a language model and a retriever to handle context-aware question answering. This chain manages conversation history and retrieves relevant documents to answer user queries interactively.
📝

Syntax

The ConversationalRetrievalChain requires two main parts: a language model (llm) and a retriever (retriever). The retriever fetches relevant documents based on the conversation, and the language model generates answers using those documents and chat history.

You create the chain with ConversationalRetrievalChain.from_llm(llm, retriever). Then call chain.run({"question": question, "chat_history": chat_history}) to get answers.

python
from langchain.chains import ConversationalRetrievalChain

# llm: language model instance
# retriever: retriever instance
chain = ConversationalRetrievalChain.from_llm(llm, retriever)

# Run the chain with a question and chat history
result = chain.run({
    "question": "Your question here",
    "chat_history": []  # list of (question, answer) tuples
})
💻

Example

This example shows how to set up a conversational retrieval chain using OpenAI's GPT-4 model and a vector store retriever. It demonstrates asking a question with an empty chat history and getting a context-aware answer.

python
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.vectorstores import FAISS
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Load documents and create embeddings
loader = TextLoader("example.txt")
docs = loader.load()
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
doc_chunks = splitter.split_documents(docs)

embeddings = OpenAIEmbeddings()
vectorstore = FAISS.from_documents(doc_chunks, embeddings)
retriever = vectorstore.as_retriever()

# Initialize language model
llm = ChatOpenAI(model_name="gpt-4", temperature=0)

# Create conversational retrieval chain
chain = ConversationalRetrievalChain.from_llm(llm, retriever)

# Example question and empty chat history
question = "What is Langchain used for?"
chat_history = []

# Run the chain
result = chain.run({"question": question, "chat_history": chat_history})
print(result)

Output

Langchain is used to build applications with language models by combining them with data sources and tools, enabling conversational and retrieval-augmented tasks.
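To keep a multi-turn conversation coherent, append each completed turn to chat_history before the next chain.run call. The update_history function below is a hypothetical convenience helper (not part of Langchain) that also trims old turns so the prompt stays small:

```python
def update_history(chat_history, question, answer, max_turns=5):
    """Append a (question, answer) tuple and keep only the last max_turns turns."""
    chat_history = chat_history + [(question, answer)]
    return chat_history[-max_turns:]

# The first turn starts with an empty history; each answer is folded back in
history = []
history = update_history(history, "What is Langchain used for?",
                         "Building applications with language models.")
history = update_history(history, "Does it support retrieval?",
                         "Yes, via retrievers and vector stores.")

print(len(history))   # 2
print(history[0][0])  # What is Langchain used for?
```

A follow-up call would then be chain.run({"question": follow_up, "chat_history": history}), letting the chain resolve references like "it" against earlier turns.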
⚠️

Common Pitfalls

  • Not providing chat history: chat_history is a required input key even on the first turn; omitting it raises an error. Pass an empty list ([]) for the first question and append (question, answer) tuples afterward so follow-ups keep context.
  • Incorrect retriever setup: The retriever must be properly connected to a vector store or document source; otherwise, no relevant documents will be found.
  • Using incompatible language models: Ensure the language model supports chat-based input/output, like OpenAI's chat models.
python
from langchain.chains import ConversationalRetrievalChain

# Wrong: chat_history is a required input key, so omitting it raises a ValueError
result_wrong = chain.run({"question": "What is Langchain?"})

# Right: include chat_history as list of tuples
result_right = chain.run({"question": "What is Langchain?", "chat_history": []})
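A quick way to catch the first two pitfalls before the chain runs is to check the input dict yourself. validate_payload below is a hypothetical helper sketch, not a Langchain API:

```python
def validate_payload(payload):
    """Raise early if a ConversationalRetrievalChain input dict is malformed."""
    if "question" not in payload:
        raise ValueError("payload is missing the 'question' key")
    history = payload.get("chat_history")
    if not isinstance(history, list):
        raise ValueError("'chat_history' must be a list of (question, answer) tuples")
    if not all(isinstance(turn, tuple) and len(turn) == 2 for turn in history):
        raise ValueError("each chat_history entry must be a (question, answer) tuple")
    return payload

# Passes: both keys present, history is a list of 2-tuples
validate_payload({"question": "What is Langchain?", "chat_history": []})
```

Calling chain.run(validate_payload(payload)) turns a confusing downstream failure into an immediate, readable error.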
📊

Quick Reference

  • Import: from langchain.chains import ConversationalRetrievalChain
  • Create chain: ConversationalRetrievalChain.from_llm(llm, retriever)
  • Run chain: chain.run({"question": str, "chat_history": list})
  • Chat history: List of (question, answer) tuples to keep context
  • Retriever: Must be connected to your document source
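The reference items above can be tied together in a small driver loop. chat_loop is a hypothetical wrapper (not a Langchain API) that threads (question, answer) pairs through chat_history across several calls:

```python
def chat_loop(chain, questions):
    """Run a sequence of questions through a ConversationalRetrievalChain,
    carrying (question, answer) pairs forward in chat_history."""
    chat_history = []
    answers = []
    for q in questions:
        answer = chain.run({"question": q, "chat_history": chat_history})
        chat_history.append((q, answer))
        answers.append(answer)
    return answers
```

Any object with a run method accepting the {"question", "chat_history"} dict works here, such as the chain built in the example above.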
✅

Key Takeaways

Use ConversationalRetrievalChain with a chat-capable language model and a retriever for context-aware Q&A.
Always pass chat_history, starting with an empty list on the first turn and appending (question, answer) pairs afterward to maintain conversation flow.
Ensure your retriever is properly set up with relevant documents for accurate retrieval.
Call chain.run() with a dictionary containing 'question' and 'chat_history' keys.
Omitting chat_history or using a non-chat model leads to errors or poor responses.