Complete the code to load documents using LangChain's loader.
from langchain.document_loaders import TextLoader

docs = TextLoader('[1]').load()
TextLoader reads a plain-text file and wraps its contents in a Document object. The blank is the file path; here the expected answer is 'data.txt'.
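To make the idea concrete, here is a rough pure-Python sketch of what a text loader does. This is illustrative only, not LangChain's actual implementation; the simplified Document class and the load_text_file helper are made up for this example.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    # Simplified stand-in for LangChain's Document: text plus metadata.
    page_content: str
    metadata: dict = field(default_factory=dict)

def load_text_file(path: str) -> list[Document]:
    # Read the whole file and wrap it in a single Document,
    # recording the source path in the metadata.
    with open(path, encoding="utf-8") as f:
        return [Document(page_content=f.read(), metadata={"source": path})]
```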
Complete the code to split documents into chunks for better retrieval.
from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(chunk_size=[1], chunk_overlap=20)
docs = splitter.split_documents(docs)
A chunk_size of 1000 characters is a common choice: chunks stay small enough for precise retrieval while preserving enough surrounding context, and the 20-character overlap keeps information from being cut off abruptly at chunk boundaries.
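The mechanics of size and overlap can be shown with a naive fixed-window splitter. This is a simplification: the real RecursiveCharacterTextSplitter first tries to break on paragraph, sentence, and word separators before falling back to raw character positions.

```python
def split_text(text: str, chunk_size: int, chunk_overlap: int) -> list[str]:
    # Step forward by (chunk_size - chunk_overlap) so that each chunk
    # shares chunk_overlap characters with the previous one.
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```

With chunk_size=4 and chunk_overlap=1, "abcdefghij" becomes ["abcd", "defg", "ghij", "j"]: each chunk repeats the last character of its predecessor.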
Fix the error in the code to create a vector store from documents.
from langchain.vectorstores import FAISS
from langchain.embeddings.openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
vectorstore = FAISS.from_documents(docs, [1])
FAISS.from_documents takes the documents plus an embeddings object, which converts each chunk into a vector before it is indexed. The blank is therefore the 'embeddings' object created on the previous line.
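Conceptually, the vector store pairs each document with its embedding and searches by similarity. The toy brute-force version below illustrates this; real FAISS builds an optimized index, and the embed function here is a stand-in for OpenAIEmbeddings.

```python
import math

class ToyVectorStore:
    # Brute-force stand-in for a vector store: keep (vector, text) pairs
    # and rank them by cosine similarity at query time.
    def __init__(self, embed):
        self.embed = embed      # function: str -> list[float]
        self.entries = []       # list of (vector, text)

    @classmethod
    def from_documents(cls, texts, embed):
        store = cls(embed)
        for t in texts:
            store.entries.append((embed(t), t))
        return store

    def similarity_search(self, query: str, k: int = 2) -> list[str]:
        q = self.embed(query)

        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb) if na and nb else 0.0

        ranked = sorted(self.entries, key=lambda e: cosine(q, e[0]), reverse=True)
        return [text for _, text in ranked[:k]]
```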
Fill both blanks to create a RetrievalQA chain with the vector store retriever.
from langchain.chains import RetrievalQA

retriever = vectorstore.[1]()
qa_chain = RetrievalQA.from_chain_type(llm=llm, retriever=[2])
The method that exposes a vector store as a retriever is 'as_retriever', so the first blank is as_retriever. The resulting 'retriever' object fills the second blank, where it is passed to RetrievalQA.from_chain_type.
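The retriever returned by as_retriever is, in essence, a thin wrapper around the store's similarity search. The sketch below shows the idea; the class name and the assumption of a store with a similarity_search method are hypothetical, not LangChain's real classes.

```python
class ToyRetriever:
    # What as_retriever() conceptually returns: an object whose only job
    # is to fetch the k most relevant documents for a query string.
    def __init__(self, store, k: int = 4):
        self.store = store
        self.k = k

    def get_relevant_documents(self, query: str):
        return self.store.similarity_search(query, k=self.k)
```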
Fill all three blanks to run a query and print the answer from the RetrievalQA chain.
query = '[1]'
result = qa_chain.[2](query)
print(result['[3]'])
The query is a question string. The method to call is 'invoke' (plain qa_chain.run(query) returns a bare string, not a dictionary, so it would not work with the final line). The answer is accessed from the 'result' key of the returned dictionary, not 'answer'.
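Putting the steps together, a minimal retrieve-then-answer loop looks like the sketch below. The function names and the fake embed/llm callables are assumptions for illustration; a real chain would call OpenAIEmbeddings and an actual LLM instead. Note the returned dictionary mirrors RetrievalQA's shape, with the answer under 'result'.

```python
def answer_with_retrieval(question, chunks, embed, llm):
    # Minimal RetrievalQA-style loop: score chunks against the question,
    # stuff the best ones into a prompt, and ask the LLM.
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    q = embed(question)
    ranked = sorted(chunks, key=lambda c: dot(embed(c), q), reverse=True)
    context = "\n".join(ranked[:2])
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return {"query": question, "result": llm(prompt)}
```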