For Retrieval-Augmented Generation (RAG) agents, the key metric is retrieval accuracy. This measures how well the agent finds the right information from its knowledge base. Good retrieval accuracy means the agent uses correct facts to answer questions. Another important metric is response relevance, which checks if the agent's answers are useful and on-topic. These metrics matter because RAG agents combine retrieved knowledge with language generation to give informed answers.
RAG Gives Agents Knowledge in Agentic AI - Why Metrics Matter
Retrieved Info vs. Correct Info
|               | Correct Info Present | Correct Info Absent |
|---------------|----------------------|---------------------|
| Retrieved     | TP                   | FP                  |
| Not Retrieved | FN                   | TN                  |
- TP = Correct info retrieved
- FP = Wrong info retrieved
- FN = Correct info missed
- TN = Wrong info correctly ignored
This matrix helps measure retrieval precision and recall, which affect the agent's knowledge quality.
Precision is the fraction of retrieved facts that are actually correct. High precision means the agent rarely uses wrong info. For example, a medical assistant agent must have high precision to avoid giving harmful advice.
Recall is the fraction of all relevant facts that the agent actually finds. High recall means the agent misses little relevant info. For example, a research assistant agent benefits from high recall to gather all useful data.
RAG agents balance precision and recall to give knowledgeable and trustworthy answers.
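The precision and recall definitions above can be sketched directly from the confusion-matrix counts. This is a minimal illustration, not tied to any particular RAG framework; the function name `retrieval_metrics` and the example counts are invented for demonstration.

```python
def retrieval_metrics(tp, fp, fn):
    """Compute retrieval precision and recall from confusion-matrix counts.

    tp: correct facts retrieved, fp: wrong facts retrieved,
    fn: correct facts missed.
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical run: 90 correct facts retrieved, 10 wrong facts
# retrieved, 15 correct facts missed.
p, r = retrieval_metrics(tp=90, fp=10, fn=15)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.90, recall=0.86
```

Note that neither number alone tells the whole story: the precision score ignores the 15 missed facts, and the recall score ignores the 10 wrong ones, which is why both must be tracked together.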
- Good: Retrieval precision and recall above 85%, leading to accurate and complete answers.
- Bad: Precision below 50% means many wrong facts used, causing misinformation.
- Bad: Recall below 40% means many relevant facts missed, leading to incomplete answers.
- Balanced high precision and recall ensure the agent's knowledge is reliable and helpful.
- Accuracy paradox: High overall accuracy can hide poor retrieval if most queries are easy.
- Data leakage: If the agent's knowledge base contains test answers, metrics will be unrealistically high.
- Overfitting: The agent may memorize facts but fail to generalize to new questions.
- Ignoring relevance: Retrieving many facts but not relevant ones inflates recall but hurts usefulness.
Your RAG agent has 98% retrieval accuracy but only 12% recall on key facts. Is it good for production? Why not?
Answer: No, because the agent misses most important facts (low recall). It may give precise but incomplete answers, which can mislead users. Improving recall is critical for trustworthy knowledge.
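The 98%/12% gap is the accuracy paradox from the pitfalls above made concrete. The counts below are invented to reproduce those numbers: when the knowledge base is dominated by irrelevant documents that are easy to ignore (the TN cell), overall accuracy stays high even while most key facts are missed.

```python
def accuracy(tp, fp, fn, tn):
    """Overall accuracy: all correct decisions over all decisions."""
    return (tp + tn) / (tp + fp + fn + tn)

def recall(tp, fn):
    """Recall: correct facts found over all correct facts."""
    return tp / (tp + fn)

# Hypothetical counts: 12 key facts retrieved, 88 missed, no wrong
# facts retrieved, 4300 irrelevant documents correctly ignored.
tp, fp, fn, tn = 12, 0, 88, 4300
print(f"accuracy={accuracy(tp, fp, fn, tn):.0%}")  # accuracy=98%
print(f"recall={recall(tp, fn):.0%}")              # recall=12%
```

The 4300 easy true negatives swamp the 88 missed facts, so accuracy alone would wrongly pass this agent for production.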