
RAG evaluation metrics in Prompt Engineering / GenAI - Full Explanation

Introduction
When building Retrieval-Augmented Generation (RAG) systems, which retrieve information from large collections and use it to generate answers, it is important to know how well they work. RAG evaluation metrics help us measure how good these systems are at finding the right information and producing useful answers.
Explanation
Recall
Recall measures how many of the relevant pieces of information the system successfully finds. It focuses on completeness, showing if the system misses important facts. A high recall means the system finds most of what it should.
Recall tells us how much of the relevant information the system actually finds.
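As a minimal sketch, assuming retrieved and relevant documents are represented as plain sets of IDs (the function name `recall` is illustrative), this is how the metric can be computed:

```python
def recall(retrieved: set, relevant: set) -> float:
    """Fraction of the relevant items that the system retrieved."""
    if not relevant:
        return 0.0
    return len(retrieved & relevant) / len(relevant)

# The system retrieved 3 documents; 2 of the 4 relevant ones are among them.
print(recall({"d1", "d2", "d5"}, {"d1", "d2", "d3", "d4"}))  # 0.5
```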
Precision
Precision measures how many of the pieces of information the system finds are actually relevant. It focuses on accuracy, showing if the system avoids retrieving wrong or unrelated facts. A high precision means most of what the system retrieves is actually relevant.
Precision tells us how accurate the system’s found information is.
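Under the same set-based assumption as above (a hypothetical helper, not a library function), precision divides by the number of retrieved items instead of the number of relevant ones:

```python
def precision(retrieved: set, relevant: set) -> float:
    """Fraction of the retrieved items that are actually relevant."""
    if not retrieved:
        return 0.0
    return len(retrieved & relevant) / len(retrieved)

# Same example as before: 2 of the 3 retrieved documents are relevant.
print(precision({"d1", "d2", "d5"}, {"d1", "d2", "d3", "d4"}))  # ≈ 0.667
```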
F1 Score
F1 Score combines recall and precision into one number by taking their harmonic mean, which balances the two. It helps us understand the overall quality of the system’s information retrieval and answer generation. A high F1 score means the system is both accurate and complete.
F1 Score balances recall and precision to show overall performance.
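As a sketch of the harmonic-mean formula (function name illustrative), note that F1 is pulled toward the lower of the two values, so a system cannot score well by excelling at only one:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(f1_score(0.6, 0.75))  # ≈ 0.667, between the two but closer to the lower value
```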
Exact Match
Exact Match checks if the system’s answer exactly matches the correct answer. It is a strict measure that does not allow any differences. This metric is useful when precise answers are needed.
Exact Match measures if the answer is exactly right without any changes.
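A common convention (used, for example, in SQuAD-style evaluation) is to normalize case, punctuation, and whitespace before comparing; the simplified sketch below assumes that convention and omits article removal:

```python
import re
import string

def exact_match(prediction: str, reference: str) -> int:
    """Return 1 if the normalized strings are identical, else 0."""
    def normalize(text: str) -> str:
        text = text.lower()
        text = "".join(ch for ch in text if ch not in string.punctuation)
        return re.sub(r"\s+", " ", text).strip()
    return int(normalize(prediction) == normalize(reference))

print(exact_match("Paris.", "paris"))    # 1: differs only in case and punctuation
print(exact_match("In Paris", "Paris"))  # 0: no partial credit for extra words
```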
ROUGE and BLEU Scores
ROUGE and BLEU are metrics that compare the system’s generated text to reference answers by looking at overlapping words or phrases (n-grams). ROUGE is recall-oriented, asking how much of the reference appears in the generated text, while BLEU is precision-oriented, asking how much of the generated text appears in the reference. Both help measure how similar the generated answer is to the expected one, which is useful for evaluating text quality.
ROUGE and BLEU measure how closely generated text matches reference answers.
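Real implementations handle multiple n-gram sizes, stemming, and brevity penalties; as a toy sketch of the core idea, here is a ROUGE-1-style recall over unigrams (function name illustrative, not a library API):

```python
from collections import Counter

def rouge1_recall(candidate: str, reference: str) -> float:
    """Fraction of reference unigrams that also appear in the candidate,
    counting repeated words up to their reference frequency."""
    cand_counts = Counter(candidate.lower().split())
    ref_counts = Counter(reference.lower().split())
    overlap = sum((cand_counts & ref_counts).values())  # clipped word overlap
    return overlap / max(sum(ref_counts.values()), 1)

# 5 of the 6 reference words ("the" x2, "cat", "on", "mat") appear in the candidate.
print(rouge1_recall("the cat sat on the mat", "the cat is on the mat"))  # ≈ 0.833
```

For production use, established implementations such as the `rouge-score` package or sacreBLEU are preferable to hand-rolled versions.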
Real World Analogy

Imagine you are looking for specific books in a large library. Recall is like how many of the books you wanted you actually find on the shelves. Precision is like how many of the books you picked up are actually the ones you wanted, not random or wrong books. F1 Score is like a score that tells you how good you are at both finding and picking the right books.

Recall → Finding most of the books you wanted in the library
Precision → Picking only the books you wanted without mistakes
F1 Score → Overall score of how well you found and picked the right books
Exact Match → Picking a book that exactly matches the title you wanted
ROUGE and BLEU Scores → Comparing your book summary to the official summary to see how similar they are
Diagram
┌─────────────┐       ┌─────────────┐       ┌─────────────┐
│  Recall     │──────▶│  F1 Score   │◀──────│  Precision  │
└─────────────┘       └─────────────┘       └─────────────┘
       │                                         │
       ▼                                         ▼
┌─────────────┐                           ┌─────────────┐
│Exact Match  │                           │ROUGE & BLEU │
└─────────────┘                           └─────────────┘
Diagram showing how Recall and Precision combine into F1 Score, with Exact Match and ROUGE/BLEU as additional evaluation metrics.
Key Facts
Recall: Measures the proportion of relevant information found by the system.
Precision: Measures the proportion of found information that is relevant.
F1 Score: Harmonic mean of recall and precision, showing overall accuracy and completeness.
Exact Match: Checks if the system’s answer exactly matches the correct answer.
ROUGE Score: Evaluates overlap of words and phrases between generated and reference texts.
BLEU Score: Measures similarity of generated text to reference text based on matching n-grams.
Common Confusions
Thinking high recall means the system is always good. High recall alone can mean the system finds many relevant items but may also include many irrelevant ones, so precision must also be considered.
Believing precision and recall measure the same thing. Precision measures the accuracy of found items, while recall measures completeness; they focus on different aspects of performance.
Assuming Exact Match allows partial credit. Exact Match requires the answer to be completely correct with no differences; partial matches do not count.
Summary
Recall and precision are key metrics that measure completeness and accuracy of information retrieval in RAG systems.
F1 Score balances recall and precision to give an overall performance measure.
Exact Match and ROUGE/BLEU scores help evaluate the quality and correctness of generated answers.