Which statement best explains why RAG helps AI agents have better knowledge?
Think about how agents can get fresh information beyond their training.
RAG combines a retrieval step with generation, letting agents find relevant external documents and condition their answers on them. This means agents can answer questions using up-to-date, detailed information, not just what was frozen into their training data.
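The two steps can be sketched end to end. This is a toy illustration, not a real system: the corpus, the word-overlap retriever, and the `generate` stand-in for an LLM call are all assumptions made for the example.

```python
import re

def tokens(text):
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, corpus, k=1):
    """Toy retriever: rank documents by word overlap with the query."""
    q = tokens(query)
    return sorted(corpus, key=lambda d: len(q & tokens(d)), reverse=True)[:k]

def generate(query, docs):
    """Stand-in for an LLM call: ground the answer in the retrieved text."""
    return f"According to the retrieved context ({' '.join(docs)}), answering: {query}"

corpus = [
    "The Eiffel Tower is 330 metres tall.",
    "Photosynthesis converts light into chemical energy.",
]
docs = retrieve("How tall is the Eiffel Tower?", corpus)
print(generate("How tall is the Eiffel Tower?", docs))
```

A production retriever would use dense embeddings rather than word overlap, but the pipeline shape, retrieve then generate, is the same.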
Which model type is best suited to combine with retrieval in a RAG system to generate knowledgeable answers?
Consider which model can turn text input into meaningful answers.
RAG uses a generative language model to produce the answer after retrieving documents. Decoder-based models such as GPT-style LLMs are the natural fit for the generation step, since they can condition on the retrieved passages and produce fluent text; encoder models like BERT cannot generate text themselves and are instead commonly used on the retrieval side (e.g., as dense passage encoders).
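In practice, "conditioning on retrieved passages" often just means assembling them into the model's prompt. A minimal sketch, assuming a simple numbered-context prompt format (the format itself is illustrative, not a standard):

```python
def build_prompt(question, passages):
    """Assemble a grounding prompt for a generative LM.

    The layout here (numbered context, then question) is a common
    convention, not a requirement of any particular model.
    """
    context = "\n".join(f"[{i}] {p}" for i, p in enumerate(passages, 1))
    return f"Use the context to answer.\n\nContext:\n{context}\n\nQuestion: {question}\nAnswer:"

prompt = build_prompt(
    "How tall is the Eiffel Tower?",
    ["The Eiffel Tower is 330 metres tall."],
)
print(prompt)
```

The resulting string would then be passed to whatever generative model the system uses.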
Which metric best measures how well a RAG agent uses retrieved documents to answer questions accurately?
Think about how to check if generated text matches expected answers.
Exact match score checks whether the generated answer matches the reference answer exactly, usually after normalizing case, punctuation, and articles. It is a simple, strict way to measure answer accuracy in question-answering tasks.
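A minimal sketch of exact match with the usual normalization (lowercasing, stripping punctuation and English articles); the specific normalization steps follow common QA-evaluation practice but the helper names are my own:

```python
import re
import string

def normalize(text):
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, reference):
    """1 if the normalized strings are identical, else 0."""
    return int(normalize(prediction) == normalize(reference))
```

Note the strictness: a correct but differently worded answer scores 0, which is why exact match is often reported alongside softer metrics like token-level F1.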
Given a RAG agent that retrieves documents but often gives wrong answers, what is the most likely cause?
Think about how bad input affects output quality.
If the retrieval step returns irrelevant documents, the generator grounds its answer in the wrong information, producing errors even when the language model itself is capable. Garbage in, garbage out: poor retrieval quality is the most common failure point in a RAG pipeline.
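One way to diagnose this failure mode is to score how related each retrieved document is to the query before generating. A toy sketch, using query-word coverage as a stand-in for a real relevance score (the threshold and scoring function are illustrative assumptions):

```python
import re

def tokens(text):
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def relevance(query, doc):
    """Fraction of query words found in the document (toy relevance proxy)."""
    q = tokens(query)
    return len(q & tokens(doc)) / max(len(q), 1)

def flag_off_topic(query, docs, threshold=0.3):
    """Return retrieved documents that look unrelated to the query."""
    return [d for d in docs if relevance(query, d) < threshold]
```

If this check flags most of what the retriever returns, the fix belongs in retrieval (better embeddings, better chunking, a larger index), not in the generator.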
Why can RAG agents answer questions about new topics not seen during their training?
Consider how retrieval changes the knowledge source for the agent.
RAG agents combine retrieval of up-to-date documents with generation, allowing them to answer questions about topics introduced after training.
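The key point is that the document index can be updated at any time, with no retraining of the model. A toy sketch of that idea; the "Zephyr 9" document is a made-up post-training fact used purely for illustration:

```python
import re

class DocumentIndex:
    """Toy index: documents can be added at any time, no retraining needed."""

    def __init__(self):
        self.docs = []

    def add(self, doc):
        self.docs.append(doc)

    def search(self, query, k=1):
        """Rank stored documents by word overlap with the query."""
        q = set(re.findall(r"[a-z0-9]+", query.lower()))
        score = lambda d: len(q & set(re.findall(r"[a-z0-9]+", d.lower())))
        return sorted(self.docs, key=score, reverse=True)[:k]

index = DocumentIndex()
index.add("Water boils at 100 C at sea level.")
# A hypothetical topic that postdates the model's training cut-off:
index.add("The Zephyr 9 rocket launched in 2031 carrying a lunar rover.")
print(index.search("When did the Zephyr 9 rocket launch?"))
```

Because the generator conditions on whatever `search` returns, adding a new document to the index is enough to make the agent answerable on that topic.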