LangChain framework · ~5 mins

Basic RAG chain with LCEL in LangChain

Introduction

A basic RAG (retrieval-augmented generation) chain answers questions by combining your documents with a language model: a retriever fetches the passages most relevant to a question, and the model generates an answer grounded in them. This pattern is useful when:

You want to answer questions using your own documents or data.
You need quick, relevant answers from a large text collection.
You want to combine search and AI writing for better results.
You want to build a simple chatbot that knows your documents.
You want to learn how retrieval and generation work together.
Syntax
LangChain
# Classic import paths (LangChain < 0.1); newer releases move these
# into the langchain_community and langchain_openai packages.
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
from langchain.document_loaders import TextLoader
from langchain.indexes import VectorstoreIndexCreator

# Load documents
loader = TextLoader('mydocs.txt')
docs = loader.load()

# Create vector index
index = VectorstoreIndexCreator().from_documents(docs)

# Create RAG chain
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(),
    chain_type="stuff",
    retriever=index.vectorstore.as_retriever()
)

# Ask a question
answer = qa.run("What is LangChain?")

The RAG chain uses a retriever to find relevant documents.

LCEL stands for LangChain Expression Language, a declarative syntax that composes chain steps with the pipe (`|`) operator. The `RetrievalQA` helper shown above is the classic interface; LCEL builds the same retrieve-then-generate pipeline out of smaller composable pieces.
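The composition idea behind LCEL can be sketched in plain Python. Note this is an illustration only: the `Step` class, `DOCS` list, and `fake_llm` stub below are invented for the sketch and are not LangChain's actual `Runnable` classes; they just show the same "chain steps with `|`" pattern without needing an API key.

```python
class Step:
    """Wraps a function so steps can be chained with the | operator."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # (a | b) produces a new Step that runs a, then feeds its output to b
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# Toy "retriever": returns documents sharing words with the question
DOCS = [
    "LangChain is a framework for building LLM applications.",
    "LCEL composes chains with the pipe operator.",
    "Vector stores index document embeddings for retrieval.",
]

def retrieve(question):
    words = set(question.lower().split())
    hits = [d for d in DOCS if words & set(d.lower().split())]
    return {"question": question, "context": " ".join(hits)}

def build_prompt(inputs):
    return f"Answer using context: {inputs['context']}\nQ: {inputs['question']}"

def fake_llm(prompt):
    # Stand-in for a real model call so the example runs offline
    return "ANSWER<" + prompt + ">"

# Retrieve -> build prompt -> generate, chained with |
chain = Step(retrieve) | Step(build_prompt) | Step(fake_llm)
result = chain.invoke("What is LangChain?")
```

Real LCEL chains follow the same shape, with a vector-store retriever, a prompt template, and a chat model in place of the stubs.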

Examples
This example sets temperature=0 so the model samples deterministically, giving consistent answers across repeated runs.
LangChain
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",
    retriever=index.vectorstore.as_retriever()
)
answer = qa.run("Explain RAG in simple words.")
This example uses the 'map_reduce' chain type, which runs the model over each retrieved document separately and then combines the partial results. This helps when the retrieved documents are too long to fit into a single prompt.
LangChain
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(),
    chain_type="map_reduce",
    retriever=index.vectorstore.as_retriever()
)
answer = qa.run("Summarize the documents.")
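The difference between the 'stuff' and 'map_reduce' chain types can be sketched in plain Python. The `fake_summarize` stub below stands in for an LLM call (it is invented for the sketch, not part of LangChain); counting calls shows why 'map_reduce' costs more model invocations but handles longer inputs.

```python
# Count how many "model calls" each strategy makes
calls = {"n": 0}

def fake_summarize(text):
    # Stand-in for an LLM call: pretend summarizing = first five words
    calls["n"] += 1
    return " ".join(text.split()[:5])

docs = [
    "Document one discusses retrieval augmented generation in depth.",
    "Document two explains vector stores and embeddings thoroughly.",
]

def stuff_chain(docs):
    # "stuff": concatenate every document into ONE prompt, one model call
    return fake_summarize(" ".join(docs))

def map_reduce_chain(docs):
    # "map_reduce": process each document separately (map),
    # then combine the partial results in a final call (reduce)
    partial = [fake_summarize(d) for d in docs]
    return fake_summarize(" ".join(partial))
```

With two documents, `stuff_chain` makes one model call while `map_reduce_chain` makes three (two map calls plus one reduce call), but each of those calls sees a shorter input.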
Sample Program

This program loads a text file, creates a vector index, builds a RAG chain, and answers a question about the document.

LangChain
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
from langchain.document_loaders import TextLoader
from langchain.indexes import VectorstoreIndexCreator

# Load a simple text document
loader = TextLoader('example.txt')
docs = loader.load()

# Create vector index from documents
index = VectorstoreIndexCreator().from_documents(docs)

# Build RAG chain with OpenAI LLM
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",
    retriever=index.vectorstore.as_retriever()
)

# Ask a question
question = "What is the main topic of the document?"
answer = qa.run(question)
print(f"Question: {question}")
print(f"Answer: {answer}")
Important Notes

Make sure your OpenAI API key is set in the OPENAI_API_KEY environment variable before running these examples.
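A quick way to fail fast with a clear message if the key is missing (a small helper sketch; `check_api_key` is not part of LangChain):

```python
import os

def check_api_key(env=os.environ):
    """Return True if OPENAI_API_KEY -- the variable LangChain's OpenAI
    wrapper reads -- is set and non-empty in the given environment."""
    return bool(env.get("OPENAI_API_KEY"))

# Check up front instead of hitting a confusing auth error mid-run
if not check_api_key():
    print("Set OPENAI_API_KEY first, e.g. `export OPENAI_API_KEY=...`")
```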

Start with small documents so you can verify which passages the retriever actually returns.

LCEL lets you compose the same chains declaratively with the pipe (`|`) operator, without writing custom glue code.

Summary

RAG chains combine search and AI to answer questions from documents.

LCEL helps build these chains easily with less code.

Try loading your own documents and asking questions to see how retrieval and generation work together.