LangChain framework · ~10 mins

Basic RAG chain with LCEL in LangChain - Step-by-Step Execution

Concept Flow - Basic RAG chain with LCEL
User Query Input
Retrieve Relevant Docs
Combine Docs + Query into Prompt
LLM Generates Answer
Return Answer to User
The flow starts with a user query, retrieves relevant documents from a vector store, combines them with the query into a prompt, and has the LLM generate an answer grounded in those documents. LCEL (LangChain Expression Language) is the pipe syntax that wires these stages together.
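Before looking at the LangChain code, the five stages above can be sketched as plain Python functions. This is a hypothetical, framework-free illustration: `retrieve`, `build_prompt`, and `generate` are stand-ins for the vector search and LLM call, not real LangChain APIs.

```python
# Framework-free sketch of the flow: query -> retrieve -> prompt -> answer.
# retrieve() and generate() are stand-ins, not real LangChain calls.

DOCS = {
    "langchain": "LangChain is a framework for building LLM applications.",
    "faiss": "FAISS is a library for vector similarity search.",
}

def retrieve(query):
    # Step 2: naive keyword matching stands in for vector search
    return [text for key, text in DOCS.items() if key in query.lower()]

def build_prompt(query, docs):
    # Step 3: combine retrieved docs + query into one prompt string
    return f"Context: {' '.join(docs)}\nQuestion: {query}"

def generate(prompt):
    # Step 4: a canned response stands in for the LLM call
    return "LangChain is a framework for building LLM applications."

def rag(query):
    docs = retrieve(query)              # Step 2
    prompt = build_prompt(query, docs)  # Step 3
    return generate(prompt)             # Steps 4-5: generate and return

print(rag("What is LangChain?"))
```

The real chain swaps keyword matching for vector similarity search and the canned response for an actual model call, but the data flow is identical.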
Execution Sample
LangChain
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

# Set up the retriever from a saved FAISS index
embeddings = OpenAIEmbeddings()
retriever = FAISS.load_local(
    "faiss_index", embeddings, allow_dangerous_deserialization=True
).as_retriever()

# Compose the RAG chain with LCEL's pipe syntax:
# the retriever fills {context}, the raw query passes through as {question}
prompt = ChatPromptTemplate.from_template("Context:\n{context}\n\nQuestion: {question}")
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | ChatOpenAI()
    | StrOutputParser()
)

# Run query
answer = chain.invoke("What is LangChain?")
This code builds a retrieval-augmented QA chain with LCEL's pipe (`|`) syntax and invokes it with a query to get an answer.
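The `|` operator is what makes the chain above "LCEL": each stage's output feeds the next stage's input. As a rough illustration of that idea only (this is not LangChain's actual `Runnable` implementation), a minimal pipeable class looks like this:

```python
# Minimal sketch of LCEL-style piping: each Step wraps a function, and
# `|` composes steps so invoke() feeds one output into the next.
# This mimics the concept only; it is not LangChain's Runnable API.

class Step:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # (self | other).invoke(x) == other.invoke(self.invoke(x))
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# Stubbed stages mirroring retriever -> prompt -> llm
retrieve = Step(lambda q: {"question": q, "context": ["doc about " + q]})
prompt = Step(lambda d: f"Context: {d['context']}\nQ: {d['question']}")
llm = Step(lambda p: "stubbed answer based on: " + p.splitlines()[0])

chain = retrieve | prompt | llm
print(chain.invoke("LangChain"))
```

LangChain's real runnables add batching, streaming, and async on top, but the composition principle is the same left-to-right data flow.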
Execution Table
Step | Action | Input | Output | Notes
1 | Receive user query | 'What is LangChain?' | Query stored | Start with user input
2 | Retrieve documents | Query + FAISS index | Top relevant docs | Retriever finds related info
3 | Build prompt | Docs + query | Prompt for the LLM | Prepare input for the language model
4 | LLM generates answer | Prompt | Answer text | Language model creates the response
5 | Return answer | Answer text | Displayed to user | Final output
6 | End | N/A | Process complete | No more steps
💡 Process ends after answer is returned to user.
Variable Tracker
Variable | Start | After Step 1 | After Step 2 | After Step 3 | After Step 4 | Final
query | None | 'What is LangChain?' | 'What is LangChain?' | 'What is LangChain?' | 'What is LangChain?' | 'What is LangChain?'
retrieved_docs | None | None | [Doc1, Doc2, ...] | [Doc1, Doc2, ...] | [Doc1, Doc2, ...] | [Doc1, Doc2, ...]
lc_el_prompt | None | None | None | Prompt with docs + query | Prompt with docs + query | Prompt with docs + query
answer | None | None | None | None | 'LangChain is a framework...' | 'LangChain is a framework...'
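The variable tracker can be replayed as code. This is a hypothetical simulation of the table above: one dict of the four tracked variables, mutated step by step in the same order.

```python
# Hypothetical replay of the variable tracker: every variable starts
# as None and is filled in at the step the table shows.

state = {"query": None, "retrieved_docs": None,
         "lc_el_prompt": None, "answer": None}

state["query"] = "What is LangChain?"                 # Step 1
state["retrieved_docs"] = ["Doc1", "Doc2"]            # Step 2
state["lc_el_prompt"] = (                             # Step 3
    f"Context: {state['retrieved_docs']}\n"
    f"Question: {state['query']}"
)
state["answer"] = "LangChain is a framework..."       # Step 4

# After Step 4 every tracked variable holds a value
assert all(v is not None for v in state.values())
print(state["answer"])
```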
Key Moments - 3 Insights
Why do we need to retrieve documents before calling the LLM?
Row 2 of the execution table shows the retrieval of relevant documents. The LLM uses these docs to ground its answer in real information instead of guessing.
What exactly does the LLM receive as input?
Row 3 of the execution table shows the LLM receives a prompt that combines the user query with the retrieved documents, which is what lets it give a precise answer.
When does the process stop?
Row 6 marks the end after the answer is returned, so no further steps happen beyond that.
Visual Quiz - 3 Questions
Test your understanding
Looking at the execution table, what is the output after Step 2?
A. Top relevant documents
B. User query stored
C. Answer text
D. Process complete
💡 Hint
Check the 'Output' column for Step 2 in the execution table.
At which step does the LLM generate the answer?
A. Step 1
B. Step 3
C. Step 4
D. Step 5
💡 Hint
Look for 'LLM generates answer' in the 'Action' column.
If the retriever returns no documents, what changes in the execution table?
A. Step 2 output would be empty or None
B. Step 4 would generate the answer normally
C. Step 1 would change
D. The process would skip Step 3
💡 Hint
Focus on the 'retrieved_docs' variable in the variable tracker after Step 2.
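The scenario in the last question, a retriever that finds nothing, can be sketched directly. In this hypothetical example the index is empty, so Step 2 yields an empty list and the Step 3 prompt carries no context for the model to ground on.

```python
# Hypothetical sketch of the no-documents case: retrieval over an
# empty index returns [], so the prompt's context section is blank.

def retrieve(query, index=()):
    # Stand-in for vector search: substring match over the index
    return [d for d in index if query.lower() in d.lower()]

docs = retrieve("What is LangChain?")   # empty index -> no matches
assert docs == []                       # Step 2 output is empty

prompt = f"Context: {' '.join(docs)}\nQuestion: What is LangChain?"
print(prompt)  # the context line is blank; the LLM has nothing to cite
```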
Concept Snapshot
Basic RAG chain with LCEL:
1. Input user query.
2. Retrieve relevant documents using vector search.
3. Combine docs + query into a prompt.
4. The LLM, invoked through the LCEL chain, generates an answer.
5. Return answer to user.
This ensures answers are based on real info, not guesswork.
Full Transcript
This visual execution trace shows how a basic RAG chain built with LCEL works. First, the user inputs a query. The system then retrieves relevant documents from a vector store. These documents plus the query form a prompt for the language model, which generates an answer grounded in that context. Finally, the answer is returned to the user. Variables like 'query', 'retrieved_docs', 'lc_el_prompt', and 'answer' change step by step. Key moments clarify why retrieval happens before generation and where the process ends. The quiz tests understanding of each step's output and flow.