
RAG evaluation metrics in Prompt Engineering / GenAI - Model Metrics & Evaluation

Which metric matters for RAG evaluation and WHY

RAG (Retrieval-Augmented Generation) models combine searching for relevant information with generating answers. To check how well they work, we need metrics that measure both parts:

  • Retrieval quality: How good is the search? We use Recall@k to see if the right documents are found in the top k results.
  • Generation quality: How good is the answer? We use BLEU, ROUGE, or METEOR to compare the generated answer to a correct answer.
  • End-to-end quality: How useful is the final answer? We use Exact Match (EM) and F1 score on the answer text to check correctness and overlap.

These metrics together tell us if the model finds the right info and uses it well to answer.
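Putting the pieces together, an end-to-end evaluation loop might look like the sketch below. The `retrieve` and `generate` callables stand in for your own pipeline and are purely hypothetical; the loop just scores each stage separately, as described above:

```python
def evaluate_rag(examples, retrieve, generate, k=5):
    """Score a RAG pipeline on both stages: retrieval (Recall@k) and
    end-to-end answer quality (Exact Match). `retrieve` and `generate`
    are hypothetical stand-ins for your own components."""
    recall_sum = em_sum = 0
    for ex in examples:
        retrieved = retrieve(ex["question"], k)            # top-k doc IDs
        hits = len(set(retrieved) & set(ex["relevant_docs"]))
        recall_sum += hits / len(ex["relevant_docs"])      # per-query Recall@k
        answer = generate(ex["question"], retrieved)
        em_sum += answer.strip().lower() == ex["answer"].strip().lower()
    n = len(examples)
    return {"recall_at_k": recall_sum / n, "exact_match": em_sum / n}
```

Reporting the two numbers separately (rather than one blended score) makes it obvious which stage is failing when quality drops.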

Confusion matrix or equivalent visualization

A single confusion matrix is less natural for RAG because the pipeline has two stages, but the retrieval stage maps onto one cleanly:

                    | Relevant docs         | Irrelevant docs
    ----------------|-----------------------|------------------------
    Retrieved       | TP (relevant found)   | FP (irrelevant found)
    Not retrieved   | FN (relevant missed)  | TN (irrelevant left out)
    

Example: a small corpus of 5 docs, 3 of them relevant; the system retrieves its top 3:

    TP = 2 (relevant docs retrieved)
    FP = 1 (irrelevant doc retrieved)
    FN = 1 (relevant doc missed)
    TN = 1 (irrelevant doc correctly left out)


Recall@k = TP / (TP + FN) = 2 / (2 + 1) = 0.67
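The Recall@k formula above can be sketched in a few lines of Python. The doc IDs are hypothetical, chosen so that 2 of the 3 relevant documents land in the retrieved list:

```python
def recall_at_k(retrieved, relevant, k):
    """Fraction of all relevant docs that appear in the top-k retrieved list."""
    hits = len(set(retrieved[:k]) & set(relevant))
    return hits / len(relevant)

retrieved = ["d1", "d4", "d2"]   # hypothetical doc IDs, ranked best-first
relevant = ["d1", "d2", "d3"]    # ground-truth relevant docs
print(round(recall_at_k(retrieved, relevant, k=3), 2))  # → 0.67
```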

Precision vs Recall tradeoff with examples

In retrieval, recall is the fraction of relevant documents that get retrieved; precision is the fraction of retrieved documents that are relevant.

For RAG:

  • If recall is low, the model misses important info, so answers may be wrong or incomplete.
  • If precision is low, the model uses many irrelevant documents, confusing the answer.

Example: A medical RAG system should have high recall to not miss any important facts, even if some irrelevant info is included.

Example: A legal RAG system might prefer higher precision to avoid wrong citations, even if some relevant docs are missed.
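The tradeoff can be made concrete with a toy set-based precision/recall computation; both "systems" and all doc IDs here are hypothetical:

```python
def precision_recall(retrieved, relevant):
    """Set-based precision and recall for one query's retrieval results."""
    tp = len(set(retrieved) & set(relevant))
    return tp / len(retrieved), tp / len(relevant)

relevant = ["d1", "d2", "d3"]                   # hypothetical ground truth
broad = ["d1", "d2", "d3", "d7", "d8", "d9"]    # recall-first: cast a wide net
narrow = ["d1", "d2"]                           # precision-first: only sure hits

print(precision_recall(broad, relevant))   # (0.5, 1.0): noisy but complete
print(precision_recall(narrow, relevant))  # (1.0, ~0.67): clean but misses d3
```

The "broad" system matches the medical example (nothing missed, some noise); the "narrow" one matches the legal example (no noise, one relevant doc missed).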

What "good" vs "bad" metric values look like for RAG
  • Recall@5: Good > 0.8 means most relevant docs found in top 5; Bad < 0.5 means many relevant docs missed.
  • BLEU/ROUGE: Good > 0.5 means generated answers closely match references; Bad < 0.2 means poor answer quality.
  • Exact Match (EM): Good > 0.7 means many answers exactly match; Bad < 0.4 means many answers are wrong or incomplete.
  • F1 score: Good > 0.7 means good overlap of answer words; Bad < 0.4 means poor overlap.

Good values depend on task difficulty and data quality, but these ranges help spot strong vs. weak models.
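Exact Match and token-level F1 can be computed without any library. This is a simplified SQuAD-style sketch; real evaluation scripts also strip punctuation and articles before comparing:

```python
from collections import Counter

def exact_match(prediction, reference):
    """1 if the normalized answer strings are identical, else 0."""
    return int(prediction.strip().lower() == reference.strip().lower())

def token_f1(prediction, reference):
    """Harmonic mean of token-level precision and recall."""
    pred, ref = prediction.lower().split(), reference.lower().split()
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

# A reworded answer fails EM but still scores well on token F1.
print(exact_match("Paris is the capital", "the capital is Paris"))  # 0
print(token_f1("Paris is the capital", "the capital is Paris"))     # 1.0
```

This pair illustrates the "overfitting to reference answers" pitfall below: strict string metrics can punish answers that are correct but worded differently.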

Common pitfalls in RAG evaluation metrics
  • Ignoring retrieval quality: Only checking generated text can hide poor retrieval, leading to wrong answers.
  • Overfitting to reference answers: Metrics like BLEU may penalize valid but different answers.
  • Data leakage: If retrieval index contains test answers, metrics will be unrealistically high.
  • Confusing precision and recall: For retrieval, recall is often more important to find all relevant info.
  • Using accuracy alone: Accuracy on answer correctness can be misleading if dataset is imbalanced.
Self-check question

Your RAG model has 90% Exact Match but only 40% Recall@5 on retrieval. Is it good for production? Why or why not?

Answer: No, it is not good. The model's retrieval misses many relevant documents (low recall), so it may not have enough info to answer well in many cases. High Exact Match might be from easy questions or memorized answers, but poor retrieval limits real usefulness.

Key Result
RAG evaluation needs both retrieval recall and generation quality metrics to ensure relevant info is found and answers are accurate.