
Custom QA model fine-tuning in NLP - Model Metrics & Evaluation

Which metric matters for Custom QA model fine-tuning and WHY

For a custom question answering (QA) model, the key metrics are Exact Match (EM) and F1 score. Exact Match checks whether the model's answer is identical to the reference answer, which measures how often the model is exactly right. F1 score measures the token-level overlap between the predicted and true answers, balancing precision and recall. These metrics matter because QA answers can be short phrases or sentences, so partial matches are important to capture. High EM means the model is frequently word-for-word correct, and high F1 means it captures the answer's content even when the wording differs.

Confusion matrix or equivalent visualization

QA models don't have a classic confusion matrix the way classifiers do. Instead, we compare each predicted answer to the true answer using token-level overlap.

True Answer: "Paris is the capital of France"
Predicted: "The capital city of France is Paris"

Tokens matched (case-insensitive): the, capital, of, France, is, Paris
Tokens in true answer: 6
Tokens in predicted answer: 7

Precision = matched tokens / predicted tokens = 6/7 ≈ 0.857
Recall = matched tokens / true tokens = 6/6 = 1.0
F1 = 2 * (Precision * Recall) / (Precision + Recall)
F1 = 2 * 0.857 * 1.0 / (0.857 + 1.0) ≈ 0.923
Exact Match = 0 (the answers are not word-for-word identical)
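This token-overlap scoring can be implemented in a few lines. The following is a minimal sketch using simple lowercase whitespace tokenization; official SQuAD-style scorers additionally strip punctuation and the articles "a", "an", "the" before comparing.

```python
from collections import Counter

def normalize(text):
    # lowercase + whitespace split; real scorers also strip
    # punctuation and articles before comparing
    return text.lower().split()

def exact_match(pred, truth):
    # 1 if the normalized token sequences are identical, else 0
    return int(normalize(pred) == normalize(truth))

def token_f1(pred, truth):
    pred_toks, true_toks = normalize(pred), normalize(truth)
    # multiset intersection: each shared token counts once per occurrence
    common = Counter(pred_toks) & Counter(true_toks)
    matched = sum(common.values())
    if matched == 0:
        return 0.0
    precision = matched / len(pred_toks)
    recall = matched / len(true_toks)
    return 2 * precision * recall / (precision + recall)

truth = "Paris is the capital of France"
print(exact_match("paris is the capital of france", truth))  # 1
print(token_f1("the capital of France", truth))              # ≈ 0.8
```

Here the partial answer scores precision 4/4 = 1.0 and recall 4/6 ≈ 0.667, giving F1 = 0.8, while EM would score it 0.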
    
Precision vs Recall tradeoff with concrete examples

In QA, precision means how much of the predicted answer is correct, and recall means how much of the true answer the model found.

High precision, low recall: The model gives short answers that are always correct but miss some details. For example, answering "Paris" when the full answer is "Paris is the capital of France." This is safe but incomplete.

High recall, low precision: The model gives long answers that include the correct info but also extra wrong words. For example, "Paris is the capital of France and a big city in Europe." This covers the answer but adds noise.

Good QA models balance precision and recall to give answers that are correct and complete.
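Both failure modes fall straight out of the token-level F1 formula. A minimal sketch, assuming whitespace tokenization, scoring the short and the long answer from the examples above against the reference "Paris is the capital of France":

```python
from collections import Counter

def token_f1(pred, truth):
    pred_toks, true_toks = pred.lower().split(), truth.lower().split()
    common = Counter(pred_toks) & Counter(true_toks)
    matched = sum(common.values())
    if matched == 0:
        return 0.0
    precision = matched / len(pred_toks)
    recall = matched / len(true_toks)
    return 2 * precision * recall / (precision + recall)

truth = "Paris is the capital of France"

# High precision, low recall: the single predicted token is correct,
# but only 1 of the 6 reference tokens is covered
short_ans = token_f1("Paris", truth)  # precision 1.0, recall 1/6 -> F1 ≈ 0.29

# High recall, low precision: all 6 reference tokens are covered,
# but half of the 12 predicted tokens are extra
long_ans = token_f1(
    "Paris is the capital of France and a big city in Europe", truth
)  # precision 0.5, recall 1.0 -> F1 ≈ 0.67
```

Neither answer scores well: the metric rewards predictions that are both correct and complete.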

What "good" vs "bad" metric values look like for this use case

Good QA model:

  • Exact Match (EM) above 70% means the model often gets the answer exactly right.
  • F1 score above 80% means the model captures most of the correct answer even if wording differs.

Bad QA model:

  • EM below 40% means the model rarely matches answers exactly.
  • F1 below 50% means the model misses many important words or adds wrong info.

Metrics pitfalls

  • Exact Match is too strict: It ignores partially correct answers that are still useful.
  • Overfitting: Very high EM and F1 on training data but low on new questions means the model memorized answers, not learned to generalize.
  • Data leakage: If test questions appear in training, metrics will be falsely high.
  • Ignoring answer variability: Some questions have multiple correct answers; metrics must consider synonyms or paraphrases.
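The last pitfall has a standard mitigation: SQuAD-style evaluation scores a prediction against every acceptable reference answer and keeps the best score. A minimal sketch, assuming whitespace tokenization and a hypothetical reference list:

```python
from collections import Counter

def token_f1(pred, truth):
    pred_toks, true_toks = pred.lower().split(), truth.lower().split()
    common = Counter(pred_toks) & Counter(true_toks)
    matched = sum(common.values())
    if matched == 0:
        return 0.0
    precision = matched / len(pred_toks)
    recall = matched / len(true_toks)
    return 2 * precision * recall / (precision + recall)

def best_f1(pred, references):
    # score against every acceptable answer and keep the maximum,
    # so a valid paraphrase is not punished
    return max(token_f1(pred, ref) for ref in references)

references = ["the USA", "the United States", "the United States of America"]
print(best_f1("United States", references))  # ≈ 0.8, via "the United States"
```

Without the max over references, the same prediction scored against only "the USA" would get F1 = 0 despite being a correct answer.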

Self-check question

Your custom QA model has 60% Exact Match but 85% F1 score on the test set. Is it good for production? Why or why not?

Answer: This means the model captures most of the answer well even if wording differs (high F1), which is great. But the lower EM shows it doesn't always get the exact answer right. Depending on your use case, this might be acceptable if partial matches are useful. However, if exact wording matters most, you may want to improve the model to raise EM before production.

Key Result
Exact Match and F1 score are key metrics; high EM shows precise answers, high F1 shows good partial matching.