
Evaluating generated text (BLEU, ROUGE) in NLP - Model Metrics & Evaluation

Which metric matters for evaluating generated text, and why

When we want to check how good computer-generated text is, we use scores called BLEU and ROUGE. These scores compare the generated text to one or more good example texts (called references). BLEU looks at how many small word groups, called n-grams (like pairs or triples of words), match exactly, focusing on precision. ROUGE checks how many words or sequences overlap, focusing on recall (how much of the reference is covered). We use BLEU when we want to see if the generated text is precise and similar to the reference, as in machine translation. ROUGE is useful when we want to make sure the generated text covers the important parts of the reference, which is why it is the standard choice for summarization.

Confusion matrix or equivalent visualization

For text generation, we don't use a confusion matrix like in classification. Instead, we look at n-gram overlaps. Here is a simple example of how BLEU counts matching word groups:

Reference:  "The cat sat on the mat"
Generated: "The cat is on the mat"

Unigrams (single words) match: The, cat, on, the, mat (5 matches)
Bigrams (pairs) match: "The cat", "on the", "the mat" (3 matches)

BLEU combines these n-gram precisions (typically for n = 1 to 4) using a geometric mean, multiplied by a brevity penalty that punishes overly short outputs, to give a score between 0 and 1.
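The counting above can be sketched in a few lines of Python. This is a minimal illustration of BLEU's "clipped" n-gram precision (each candidate n-gram only counts up to the number of times it appears in the reference), not a full BLEU implementation; the function names are our own.

```python
from collections import Counter

def ngrams(tokens, n):
    """Return a Counter of all n-grams (as tuples) in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def clipped_precision(reference, candidate, n):
    """Modified n-gram precision: each candidate n-gram's count is
    clipped to its count in the reference, then divided by the total
    number of candidate n-grams."""
    ref_counts = ngrams(reference, n)
    cand_counts = ngrams(candidate, n)
    matches = sum(min(count, ref_counts[ng]) for ng, count in cand_counts.items())
    total = sum(cand_counts.values())
    return matches / total if total else 0.0

ref = "the cat sat on the mat".split()
gen = "the cat is on the mat".split()

print(clipped_precision(ref, gen, 1))  # 5 of 6 unigrams match: 0.833...
print(clipped_precision(ref, gen, 2))  # 3 of 5 bigrams match: 0.6
```

Note that "the" matches twice here because it also appears twice in the reference; clipping is what stops a candidate from gaming the score by repeating a common reference word.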
    

ROUGE looks at recall of overlapping words or sequences, for example:

Reference summary: "The cat sat on the mat quietly."
Generated summary: "Cat sat quietly on mat."

ROUGE measures how many words or phrases from the reference appear in the generated text.
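The recall side can be sketched the same way. The snippet below computes ROUGE-1 recall (overlapping unigrams divided by the number of reference unigrams) for the example above; it is a simplified illustration, not the full ROUGE package, and assumes lowercased, whitespace-split tokens.

```python
from collections import Counter

def rouge1_recall(reference, candidate):
    """ROUGE-1 recall: fraction of reference unigrams that also
    appear in the candidate (counts clipped to the candidate's)."""
    ref_counts = Counter(reference)
    cand_counts = Counter(candidate)
    overlap = sum(min(count, cand_counts[w]) for w, count in ref_counts.items())
    return overlap / sum(ref_counts.values())

ref = "the cat sat on the mat quietly".split()
gen = "cat sat quietly on mat".split()

print(rouge1_recall(ref, gen))  # 5 of 7 reference words covered: 0.714...
```

Here the candidate misses both occurrences of "the", so recall is 5/7 even though every word it did produce is correct.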
    
Precision vs Recall tradeoff with concrete examples

BLEU focuses more on precision: it checks how much of the generated text matches the reference exactly. If the generated text has many extra or wrong words, BLEU score goes down.

ROUGE focuses more on recall: it checks how much of the reference text is covered by the generated text. If the generated text misses important parts, ROUGE score goes down.

Example:

  • If a summary includes only a few words but all are correct, BLEU might be high but ROUGE low (low recall).
  • If a summary covers many important points but adds some extra words, ROUGE might be high but BLEU lower (lower precision).

Choosing which metric to focus on depends on what matters more: exactness (BLEU) or coverage (ROUGE).
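The tradeoff in the bullets above can be made concrete with unigram precision and recall side by side. The reference and summaries below are made-up toy data chosen to exaggerate the effect:

```python
from collections import Counter

def unigram_precision(reference, candidate):
    """BLEU-style: fraction of candidate words found in the reference."""
    ref_counts = Counter(reference)
    cand_counts = Counter(candidate)
    matches = sum(min(c, ref_counts[w]) for w, c in cand_counts.items())
    return matches / len(candidate)

def unigram_recall(reference, candidate):
    """ROUGE-style: fraction of reference words found in the candidate."""
    ref_counts = Counter(reference)
    cand_counts = Counter(candidate)
    matches = sum(min(c, cand_counts[w]) for w, c in ref_counts.items())
    return matches / len(reference)

ref = "the report covers revenue growth costs and future plans".split()
short_summary = "revenue growth".split()  # few words, all correct
long_summary = ("the report covers revenue growth costs future plans "
                "and some extra filler words").split()  # full coverage plus filler

print(unigram_precision(ref, short_summary), unigram_recall(ref, short_summary))
# 1.0 precision, but only 2/9 recall
print(unigram_precision(ref, long_summary), unigram_recall(ref, long_summary))
# 9/13 precision, but full 1.0 recall
```

The short summary is rewarded by a precision-oriented metric; the long one is rewarded by a recall-oriented metric, which is exactly the choice between BLEU and ROUGE.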

What "good" vs "bad" metric values look like for this use case

Good BLEU or ROUGE scores are closer to 1.0, meaning the generated text is very similar to the reference.

Good example: BLEU = 0.7, ROUGE = 0.75 means the generated text matches well in both exact words and coverage.

Bad example: BLEU = 0.2, ROUGE = 0.3 means the generated text is quite different or missing important parts.

However, what counts as a good score depends on the task. For creative writing, lower scores may be fine because many different outputs are acceptable. Also note that BLEU is often reported on a 0-100 scale in practice; in machine translation research, a BLEU of 30-40 (i.e., 0.3-0.4) is already considered strong, and values near 0.7 usually indicate near-verbatim overlap with the reference.

Metrics pitfalls
  • Overfitting to references: Models might copy reference text exactly to get high scores but produce less natural text.
  • Ignoring meaning: BLEU and ROUGE check word overlap, not if the meaning is correct or fluent.
  • Short text bias: Very short generated texts can get high precision but miss important content.
  • Multiple valid outputs: There can be many good ways to say the same thing, but BLEU/ROUGE only compare to given references.
  • Data leakage: Using test references during training inflates scores unfairly.
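The "ignoring meaning" and "multiple valid outputs" pitfalls are easy to demonstrate: a perfectly good paraphrase can score near zero because it shares almost no surface words with the reference. The paraphrase below is a made-up example:

```python
from collections import Counter

def unigram_precision(reference, candidate):
    """Fraction of candidate words that also appear in the reference."""
    ref_counts = Counter(reference)
    cand_counts = Counter(candidate)
    matches = sum(min(c, ref_counts[w]) for w, c in cand_counts.items())
    return matches / len(candidate)

ref = "the cat sat on the mat".split()
paraphrase = "a feline rested upon the rug".split()  # same meaning, different words

print(unigram_precision(ref, paraphrase))  # only "the" overlaps: 1/6 = 0.166...
```

Despite expressing the same idea, the paraphrase scores 1/6, which is why overlap metrics are usually complemented by human judgment or embedding-based metrics when paraphrasing is expected.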

Self-check question

Your text generation model has a BLEU score of 0.85 but a ROUGE score of 0.40. Is this good for a summary task? Why or why not?

Answer: This means the generated text matches the reference words very precisely (high BLEU) but covers only a small part of the reference (low ROUGE). For summaries, coverage is important, so this model might miss key points. It is not good enough for summary tasks because it lacks recall.

Key Result
BLEU measures precision of word overlap; ROUGE measures recall of reference coverage; both are needed to evaluate generated text quality.