Complete the code to load documents for retrieval in RAG.
from langchain.document_loaders import TextLoader

loader = TextLoader('[1]')
docs = loader.load()
The knowledge base must be loaded from a text file, so 'data.txt' is the correct answer.
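As a rough, pure-Python sketch of what `TextLoader.load()` does conceptually (the `Document` class and `load_text_file` helper here are hypothetical stand-ins for illustration, not LangChain's real API):

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for LangChain's Document class, for illustration only.
@dataclass
class Document:
    page_content: str
    metadata: dict = field(default_factory=dict)

def load_text_file(path):
    """Read a whole text file into a single Document.

    Roughly what TextLoader.load() returns: a one-element list whose
    metadata records the source path.
    """
    with open(path, encoding="utf-8") as f:
        return [Document(page_content=f.read(), metadata={"source": path})]
```

The key point for the exercise is that the loader takes a file path string, which is why the blank is filled with 'data.txt'.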
Complete the code to create a retriever from documents for RAG.
from langchain.vectorstores import FAISS

retriever = FAISS.from_documents(docs, [1]).as_retriever()
The retriever needs an embedding model to convert documents into vectors for similarity search.
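To see why an embedding model matters, here is a toy, pure-Python sketch of embedding-based retrieval. The character-frequency `embed` function is a deliberately crude stand-in for a real embedding model, and `retrieve` mimics what a vector store's similarity search does at scale:

```python
import math

def embed(text):
    # Toy embedding: a 26-dimensional letter-frequency vector.
    # A real embedding model produces dense semantic vectors instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if 'a' <= ch <= 'z':
            vec[ord(ch) - ord('a')] += 1.0
    return vec

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by similarity of their embeddings to the query embedding.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]
```

Without an embedding model there are no vectors to compare, which is why the blank in `FAISS.from_documents(docs, [1])` must be filled with one.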
Fix the error in the RAG agent creation by completing the missing argument.
from langchain.chains import RetrievalQA

rag_agent = RetrievalQA.from_chain_type(llm=llm, retriever=[1], return_source_documents=True)
The RAG agent requires a retriever to fetch relevant documents for answering queries.
Fill both blanks to create a dictionary comprehension that filters documents by length and stores their text.
filtered_docs = {doc.metadata['title']: doc.page_content
                 for doc in docs
                 if len(doc.page_content) [1] 100
                 and doc.metadata['source'] [2] 'trusted_source'}

We keep only documents longer than 100 characters whose source is 'trusted_source', so the blanks are '>' and '=='.
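With the blanks filled in, the comprehension can be run against a few in-memory documents. The `SimpleNamespace` objects below are hypothetical stand-ins that mimic the `page_content`/`metadata` shape of a loaded document:

```python
from types import SimpleNamespace

# Hypothetical in-memory documents mimicking a loaded Document's shape.
docs = [
    SimpleNamespace(page_content="x" * 150,
                    metadata={"title": "Guide", "source": "trusted_source"}),
    SimpleNamespace(page_content="short",
                    metadata={"title": "Note", "source": "trusted_source"}),
    SimpleNamespace(page_content="y" * 200,
                    metadata={"title": "Blog", "source": "random_site"}),
]

# Blanks filled in: '>' for the length check, '==' for the source check.
filtered_docs = {doc.metadata['title']: doc.page_content
                 for doc in docs
                 if len(doc.page_content) > 100
                 and doc.metadata['source'] == 'trusted_source'}
```

Only "Guide" survives: "Note" is too short and "Blog" comes from the wrong source.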
Fill all three blanks to build a RAG pipeline that embeds, retrieves, and answers questions.
embedding = [1]()
retriever = FAISS.from_documents(docs, embedding).as_retriever()
rag_agent = RetrievalQA.from_chain_type(llm=[2](), retriever=retriever, return_source_documents=True)
answer = rag_agent.run([3])
The embedding model is OpenAIEmbeddings, the language model is OpenAI, and the question is a string.
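To make the embed → retrieve → answer flow concrete without API keys, here is a toy end-to-end sketch. The word-overlap `embed` and the prompt-stuffing `rag_answer` are deliberately simplified stand-ins for OpenAIEmbeddings, FAISS, and an LLM call:

```python
def embed(text):
    # Bag-of-words stand-in for a real embedding model.
    return set(text.lower().split())

def retrieve(question, docs, k=1):
    # Rank documents by word overlap with the question,
    # a stand-in for FAISS similarity search over embeddings.
    q = embed(question)
    return sorted(docs, key=lambda d: len(q & embed(d)), reverse=True)[:k]

def rag_answer(question, docs):
    # Stuff the top retrieved document into the answer;
    # a real chain would pass this context to the LLM here.
    context = retrieve(question, docs)[0]
    return f"Based on: {context}"
```

The shape matches the exercise: build an embedding, wrap retrieval around it, then answer a question string by combining retrieved context with a language model.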