In simple terms, what does a RAG chain do in a language model system?
Think about how RAG combines searching and writing.
A RAG chain first retrieves relevant information from a document store, then passes that information to the language model as context so it can generate grounded, more accurate answers.
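The retrieve-then-generate pattern can be sketched in a few lines of plain Python. This is a toy illustration only: `retrieve` and `generate` are hypothetical stand-ins (naive keyword matching and string formatting) for a real vector store and LLM call.

```python
# Toy sketch of retrieve-then-generate (hypothetical helpers, not LangChain).
def retrieve(query, docs):
    # Naive keyword retrieval: keep docs sharing any word with the query.
    words = set(query.lower().split())
    return [d for d in docs if words & set(d.lower().split())]

def generate(query, context):
    # Stand-in for an LLM call: "answer" using the retrieved context.
    return f"Q: {query}\nContext: {' '.join(context)}"

docs = [
    "AI is the simulation of human intelligence.",
    "Bread rises with yeast.",
]
hits = retrieve("What is AI?", docs)
print(generate("What is AI?", hits))
```

Only the relevant document reaches the generation step; the unrelated one is filtered out by retrieval.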
Given the following code snippet using LangChain, what will be the output?
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

retriever = DummyRetriever()
qa = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0), chain_type="stuff", retriever=retriever)
query = "What is AI?"
result = qa.run(query)
print(result)
Assume DummyRetriever returns a document defining AI correctly.
The retriever supplies a document defining AI, and the LLM generates its answer from that document, so the printed result is a correct definition of AI grounded in the retrieved text.
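What `chain_type="stuff"` does can be sketched without LangChain: all retrieved documents are concatenated ("stuffed") into a single prompt before one LLM call. The helper name `stuff_prompt` below is an illustrative assumption, not a LangChain API.

```python
# Hedged sketch of the "stuff" strategy: concatenate every retrieved
# document into one prompt for a single LLM call.
def stuff_prompt(docs, question):
    context = "\n".join(docs)
    return (
        "Use the context to answer the question.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

docs = ["AI is the simulation of human intelligence by machines."]
prompt = stuff_prompt(docs, "What is AI?")
print(prompt)
```

This works well for a handful of short documents; with many or long documents the stuffed prompt can exceed the model's context window, which is why other chain types exist.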
In a RAG chain using LCEL, which temperature setting is best for producing consistent factual answers?
Lower temperature means less randomness and more consistent, factual output.
Setting temperature to 0 makes the model's output (near-)deterministic, since it always picks the most likely next token, which is ideal for consistent factual answers in RAG chains.
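Why temperature 0 is deterministic can be shown with a toy sampler (pure Python, not LangChain): sampling from softmax(logits / T) collapses to argmax as T approaches 0. The `sample` function below is an illustrative assumption about how temperature sampling typically works.

```python
import math
import random

def sample(logits, temperature, rng):
    """Toy temperature sampling: T == 0 degenerates to argmax."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax over temperature-scaled logits (max-subtracted for stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    # Draw an index proportionally to the softmax probabilities.
    r = rng.random()
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e / total
        if r <= acc:
            return i
    return len(logits) - 1

logits = [2.0, 1.0, 0.5]
rng = random.Random(0)
# At temperature 0, every call returns the same (most likely) token.
assert all(sample(logits, 0, rng) == 0 for _ in range(10))
```

At higher temperatures the same function would return different indices across calls, which is the "creativity" the hint refers to.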
You run a RAG chain on 100 questions. It answers 85 correctly. What is the accuracy?
Accuracy = (correct answers / total questions) * 100
85 correct out of 100 means accuracy is 85%.
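The arithmetic above as a small helper:

```python
def accuracy(correct, total):
    """Accuracy as a percentage: (correct / total) * 100."""
    return correct / total * 100

print(accuracy(85, 100))  # 85.0
```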
Given this snippet, why does the RAG chain return empty answers?
retriever = SomeRetriever()
qa_chain = RetrievalQA.from_chain_type(llm=OpenAI(), retriever=retriever)
answer = qa_chain.run("")
print(answer)
Check the input query string.
An empty query gives the retriever nothing to search for, so no relevant documents are retrieved and the chain has no context from which to generate an answer.
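A common fix is to validate the query before invoking the chain. The sketch below uses a hypothetical `run_chain` stand-in for `qa_chain.run` so it stays self-contained; the guard itself is the point.

```python
def run_chain(query):
    # Hypothetical stand-in for qa_chain.run(query).
    return f"answer to: {query}"

def safe_ask(query):
    # Reject empty or whitespace-only queries before hitting the chain.
    if not query or not query.strip():
        raise ValueError("Query must be a non-empty string.")
    return run_chain(query)

print(safe_ask("What is AI?"))
```

Failing fast on an empty query surfaces the bug at the call site instead of returning a silently empty answer.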