
Factual consistency checking in Prompt Engineering / GenAI - Model Metrics & Evaluation

Metrics & Evaluation - Factual consistency checking
Which metric matters for Factual consistency checking and WHY

Factual consistency checking means verifying that the AI's answers match real, verifiable facts. The two key metrics are precision and recall. Precision measures what fraction of the claims the AI makes are actually correct. Recall measures what fraction of the relevant true facts the AI includes. We want both high: high precision means the AI states only true facts, and high recall means it does not miss important ones.

Confusion matrix for Factual consistency checking
      |                | Predicted True                 | Predicted False                    |
      |----------------|--------------------------------|------------------------------------|
      | Actually True  | True Positive (correct fact)   | False Negative (missed fact)       |
      | Actually False | False Positive (wrong fact)    | True Negative (correctly no fact)  |

Example numbers for 100 claims:

      TP = 70 (correct facts found)
      FP = 10 (wrong facts stated)
      FN = 15 (true facts missed)
      TN = 5  (false claims correctly rejected)
    
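Plugging the example counts above into the standard precision and recall formulas gives concrete values (a minimal sketch; the variable names are illustrative):

```python
# Precision and recall for the example counts above (100 claims total).
tp, fp, fn, tn = 70, 10, 15, 5

precision = tp / (tp + fp)   # fraction of stated facts that are correct
recall = tp / (tp + fn)      # fraction of true facts the model included

print(f"precision = {precision:.3f}")  # 0.875
print(f"recall    = {recall:.3f}")     # 0.824
```

So this model states mostly true facts (precision 0.875) but still misses about 18% of the true facts it should have included.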
Precision vs Recall tradeoff with examples

If the AI is very careful and only states facts it is sure about, it will have high precision but might miss some facts, so lower recall. This means fewer wrong facts but some true facts are missing.

If the AI tries to include every possible fact, it will have high recall but might include wrong facts, so lower precision. This means more complete answers but some errors.

For example, a medical AI should have high precision to avoid wrong advice. A news summarizer might want higher recall to cover all important facts.
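One common way this tradeoff shows up in practice is a confidence threshold: the model only states claims it scores above some cutoff. The sketch below uses made-up claims and confidence scores purely for illustration, assuming higher-confidence claims are more often true:

```python
# Hedged sketch: how a confidence threshold trades precision against recall.
# The (confidence, is_actually_true) pairs below are made-up illustration data.
claims = [
    (0.95, True), (0.90, True), (0.85, True), (0.80, True),
    (0.70, False), (0.60, True), (0.55, False), (0.40, True),
]
total_true = sum(1 for _, ok in claims if ok)  # 6 true facts overall

def precision_recall(threshold):
    """Precision and recall if the model only states claims scored >= threshold."""
    stated = [ok for conf, ok in claims if conf >= threshold]
    if not stated:
        return 0.0, 0.0
    tp = sum(stated)
    return tp / len(stated), tp / total_true

for t in (0.5, 0.75):
    p, r = precision_recall(t)
    print(f"threshold={t}: precision={p:.2f} recall={r:.2f}")
# threshold=0.5:  precision=0.71 recall=0.83  (states more, includes errors)
# threshold=0.75: precision=1.00 recall=0.67  (careful, but misses true facts)
```

Raising the threshold makes the model "careful" (higher precision, lower recall); lowering it makes answers more complete but less reliable.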

What good vs bad metric values look like

Good: Precision and recall both above 0.85 means most facts are correct and most true facts are included.

Bad: Precision below 0.5 means many wrong facts. Recall below 0.5 means many true facts missed. Either case means the AI is not reliable.

Common pitfalls in factual consistency metrics
  • Accuracy paradox: High overall accuracy can hide poor precision or recall if data is unbalanced.
  • Data leakage: If test facts appear in training, metrics look better but model is not truly consistent.
  • Overfitting: Model memorizes facts but fails on new facts, causing low recall.
  • Ignoring context: some facts depend on context (e.g. "the current champion"); evaluation must account for this, or context-dependent but correct claims get scored as errors.
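The accuracy paradox from the list above is easy to demonstrate with imbalanced data. In this sketch (the counts are invented for illustration), a lazy checker that labels almost everything "false" scores high accuracy while missing nearly all true facts:

```python
# Hedged sketch of the accuracy paradox on imbalanced data.
# 1000 claims to verify; only 50 are actually true (heavy imbalance).
actual = [True] * 50 + [False] * 950
predicted = [True] * 6 + [False] * 994  # catches only 6 of the 50 true facts

tp = sum(a and p for a, p in zip(actual, predicted))          # 6
fn = sum(a and not p for a, p in zip(actual, predicted))      # 44

accuracy = sum(a == p for a, p in zip(actual, predicted)) / len(actual)
recall = tp / (tp + fn)

print(f"accuracy = {accuracy:.3f}")  # 0.956  -- looks great
print(f"recall   = {recall:.2f}")    # 0.12   -- misses 88% of true facts
```

High accuracy here comes almost entirely from the easy negatives, which is exactly why recall must be checked separately on imbalanced data.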
Self-check question

Your model has 98% accuracy but only 12% recall on true facts. Is it good for production?

Answer: No. The model misses most true facts (low recall), so it is not reliable even if accuracy looks high. It needs improvement to find more true facts.

Key Result
Precision measures how many of the AI's stated facts are correct; recall measures how many of the true facts it found. Both must be high for reliable factual consistency.