
ROUGE evaluation metrics in NLP - Model Metrics & Evaluation

Which metric matters for ROUGE and WHY

ROUGE stands for Recall-Oriented Understudy for Gisting Evaluation. It measures how well a machine-generated summary matches a human-written reference by counting overlapping words or phrases. The main ROUGE metrics are ROUGE-N (overlapping n-grams), ROUGE-L (longest common subsequence), and ROUGE-S (skip-bigrams). ROUGE is recall-oriented: it checks how much of the human summary is captured by the machine summary, which tells us whether the important content made it in.
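To make ROUGE-L concrete, here is a minimal Python sketch that scores a candidate against a reference via the longest common subsequence (LCS). The whitespace tokenization and function names are illustrative assumptions, not from any particular library.

def lcs_length(a, b):
    """Length of the longest common subsequence of two token lists."""
    # Classic dynamic programming: dp[i][j] = LCS length of a[:i] and b[:j].
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l(reference, candidate):
    ref, cand = reference.lower().split(), candidate.lower().split()
    lcs = lcs_length(ref, cand)
    recall = lcs / len(ref)
    precision = lcs / len(cand)
    f1 = 2 * precision * recall / (precision + recall) if lcs else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

print(rouge_l("The cat sat on the mat", "The cat is on the mat"))
# LCS is "the cat on the mat" (5 tokens), so precision = recall = 5/6 ≈ 0.83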

Confusion matrix or equivalent visualization

ROUGE does not use a confusion matrix the way classification metrics do. Instead, it counts overlapping units between the machine and human summaries:

Human summary: "The cat sat on the mat"
Machine summary: "The cat is on the mat"

ROUGE-1 (unigram) overlap, with repeats clipped: "the" (×2), "cat", "on", "mat" = 5
Total human unigrams: 6
Total machine unigrams: 6

ROUGE-1 Recall = Overlap / Human unigrams = 5 / 6 ≈ 0.83
ROUGE-1 Precision = Overlap / Machine unigrams = 5 / 6 ≈ 0.83
ROUGE-1 F1 = 2 * (Precision * Recall) / (Precision + Recall) ≈ 0.83
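The same computation in code: a minimal ROUGE-1 sketch with clipped counts (each unigram is credited at most as often as it appears in the reference), reproducing the numbers above. Whitespace tokenization and the function name are illustrative assumptions.

from collections import Counter

def rouge_n(reference, candidate, n=1):
    """ROUGE-N with clipped overlap counts; simple whitespace tokenization."""
    def ngrams(text):
        toks = text.lower().split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))

    ref, cand = ngrams(reference), ngrams(candidate)
    # Counter intersection clips each n-gram to its minimum count on either side.
    overlap = sum((ref & cand).values())
    recall = overlap / sum(ref.values())
    precision = overlap / sum(cand.values())
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

print(rouge_n("The cat sat on the mat", "The cat is on the mat"))
# {'precision': 0.833..., 'recall': 0.833..., 'f1': 0.833...}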
Precision vs Recall tradeoff with examples

ROUGE recall measures how much of the human reference summary is covered by the machine summary. High recall means the machine summary captures most of the important information.

ROUGE precision measures how much of the machine summary overlaps with the human reference. High precision means the machine summary stays focused and does not add unrelated content.

Example: If a machine summary is very long and repeats many words, recall may be high but precision low. If it is very short, precision may be high but recall low.

For summarization, recall is often more important to ensure key info is not missed, but precision helps keep summaries concise.
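One quick way to see this tradeoff in practice, assuming the rouge-score package is installed (pip install rouge-score): score a verbose and a terse candidate against the same reference.

from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=False)
reference = "the cat sat on the mat"

# Verbose candidate: covers the whole reference but pads it with extra words.
verbose = "the cat sat on the mat and the mat was red and very soft"
# Terse candidate: every word is relevant, but most content is missing.
terse = "the cat"

for name, candidate in [("verbose", verbose), ("terse", terse)]:
    s = scorer.score(reference, candidate)["rouge1"]
    print(f"{name}: precision={s.precision:.2f} recall={s.recall:.2f} f1={s.fmeasure:.2f}")
# Expected pattern: verbose -> high recall, low precision; terse -> the reverse.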

What good vs bad ROUGE values look like

Good ROUGE scores are closer to 1.0, meaning strong overlap with the human reference. As rough rules of thumb (typical values vary by dataset, domain, and summary length):

  • ROUGE-1 F1 above 0.5 is decent for many tasks.
  • ROUGE-L above 0.4 shows good sequence matching.
  • Scores below 0.3 usually mean poor summary quality.

However, very high ROUGE (near 1.0) may mean the machine summary is copying the human summary exactly, which is not always desired.

Common pitfalls with ROUGE metrics
  • Overfitting: Models may memorize training summaries, inflating ROUGE scores but not generalizing.
  • Ignoring meaning: ROUGE counts words but does not understand meaning, so paraphrased good summaries may score low.
  • Length bias: Longer summaries tend to have higher recall but lower precision.
  • Data leakage: Using test summaries in training can falsely boost ROUGE.
  • Single reference: Using only one human summary limits ROUGE's reliability; multiple references improve it (see the sketch after this list).
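A common mitigation for the single-reference pitfall is to score against several references and keep the best match. The sketch below applies that idea to ROUGE-1 recall; the max-over-references rule is one common convention, and the function names are illustrative.

from collections import Counter

def rouge1_recall(reference, candidate):
    """Clipped unigram recall of the candidate against one reference."""
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    return sum((ref & cand).values()) / sum(ref.values())

def multi_ref_recall(references, candidate):
    # Take the best score over all references: matching any acceptable
    # human summary should count in the model's favor.
    return max(rouge1_recall(ref, candidate) for ref in references)

refs = ["the cat sat on the mat", "a cat was sitting on the mat"]
print(multi_ref_recall(refs, "the cat is on the mat"))  # 5/6 ≈ 0.83
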
Self-check question

Your summarization model has ROUGE-1 recall of 0.95 but precision of 0.3. Is it good for production? Why or why not?

Answer: This means the model includes almost all important words (high recall) but also adds many unrelated words (low precision). The summary may be too long or noisy. It is not ideal for production because users want concise, relevant summaries. You should improve precision while keeping recall high.
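To see what these two numbers imply jointly, the F1 (harmonic mean) can be computed directly; this is plain arithmetic, not tied to any library.

precision, recall = 0.3, 0.95
f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.2f}")  # ≈ 0.46: the low precision drags F1 far below recall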

Key Result
ROUGE metrics measure overlap between machine and human summaries, focusing on recall to ensure key info is captured.