
Why the RAG chain connects retrieval to generation in LangChain

Introduction

A RAG (retrieval-augmented generation) chain first finds relevant information, then uses that information to generate a helpful answer. It connects searching and writing in one smooth step.

When you want a computer to answer questions using a large set of documents.
When you need to combine facts from many sources before writing a summary.
When you want to improve chatbot answers by letting it look up information first.
When you want to generate reports based on specific data retrieved from a database.
Syntax
LangChain
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

retrieval_qa_chain = RetrievalQA.from_chain_type(
    llm=OpenAI(),
    chain_type="stuff",
    retriever=my_retriever  # a retriever you have already created, e.g. from a vector store
)

retriever is the part that finds relevant documents.

llm is the language model that generates answers using the retrieved info.
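To make the flow concrete, here is a toy sketch of the retrieve-then-generate pattern that RetrievalQA wires together. The keyword-overlap retriever and the template "generator" below are stand-ins for illustration only, not real LangChain components.

```python
# Toy retrieve-then-generate flow: retrieval picks a relevant document,
# then a (stand-in) generator uses it as context for the answer.
import re

documents = [
    "The Eiffel Tower is in Paris.",
    "The Great Wall is in China.",
]

def tokens(text):
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question, docs):
    # Step 1: pick the document sharing the most words with the question.
    return max(docs, key=lambda d: len(tokens(question) & tokens(d)))

def generate(question, context):
    # Step 2: stand-in for the LLM call that answers from the context.
    return f"Context: {context} -> answering: {question}"

question = "Where is the Eiffel Tower?"
print(generate(question, retrieve(question, documents)))
```

In a real chain, both steps are handled for you: the retriever queries a vector store and the llm writes the final answer from the retrieved text.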

Examples
This sets up the chain with a language model configured for deterministic, consistent answers (temperature=0).
LangChain
retrieval_qa_chain = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",
    retriever=my_retriever
)
This uses the map_reduce strategy, which processes each retrieved document separately and then combines the partial results before generating the final answer. It is useful when the documents are too long to fit in a single prompt.
LangChain
retrieval_qa_chain = RetrievalQA.from_chain_type(
    llm=OpenAI(),
    chain_type="map_reduce",
    retriever=my_retriever
)
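The map_reduce idea can be sketched in plain Python. The map and reduce functions below are stand-ins (a first-sentence "summary" and a join); in the real chain, each step is an LLM call.

```python
# Toy illustration of map_reduce: summarize each document independently
# (the "map" step), then merge the partial summaries into one combined
# context (the "reduce" step) before answering.

documents = [
    "The Eiffel Tower is in Paris. It was completed in 1889.",
    "The Colosseum is in Rome. It hosted gladiator games.",
]

def map_step(doc):
    # Stand-in for an LLM call that summarizes one document:
    # here, just keep the first sentence.
    return doc.split(". ")[0] + "."

def reduce_step(partials):
    # Stand-in for an LLM call that merges the partial summaries.
    return " ".join(partials)

partials = [map_step(d) for d in documents]
combined = reduce_step(partials)
print(combined)
```

Because each document is handled on its own first, map_reduce scales to many or very long documents, at the cost of extra LLM calls compared to "stuff".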
Sample Program

This program creates a simple RAG chain that first finds documents about landmarks, then uses a language model to answer a question about the Eiffel Tower.

LangChain
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS
from langchain.embeddings.openai import OpenAIEmbeddings

# Sample documents
documents = [
    "The Eiffel Tower is in Paris.",
    "The Great Wall is in China.",
    "The Colosseum is in Rome."
]

# Create embeddings for documents
embeddings = OpenAIEmbeddings()
vector_store = FAISS.from_texts(documents, embeddings)

# Create retriever from vector store
retriever = vector_store.as_retriever()

# Create the RAG chain
rag_chain = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",
    retriever=retriever
)

# Ask a question
question = "Where is the Eiffel Tower located?"
answer = rag_chain.run(question)
print(f"Question: {question}")
print(f"Answer: {answer}")
Important Notes

The RAG chain first retrieves relevant documents, then generates answers using those documents.

This approach helps the model give more accurate and fact-based answers.

Common mistake: skipping retrieval and asking the model directly, which can lead to made-up (hallucinated) answers.

Summary

The RAG chain links searching for information and writing answers together.

This makes answers more accurate and useful.

It is helpful when working with lots of documents or data.