
Entity types (PERSON, ORG, LOC, DATE) in NLP - Model Metrics & Evaluation

Which metric matters for Entity Types (PERSON, ORG, LOC, DATE) and WHY

For recognizing entity types such as PERSON, ORG, LOC, and DATE, precision and recall are the key metrics. Precision measures how many of the predicted entities are correct; recall measures how many of the actual entities were found. We want both high, because missing entities (low recall) and mislabeling text (low precision) both hurt understanding.

Confusion Matrix Example for Entity Recognition
                Predicted
            P     O     L     D   None
  True P   40     2     1     0      7
       O    3    35     2     0      5
       L    1     2    38     1      8
       D    0     0     1    45      4
    None    5     4     6     3    377

This table shows how many entities of each true type were predicted as each type or missed (None). For example, 40 PERSON entities were correctly labeled PERSON (true positives for PERSON), while 7 PERSON entities were missed (predicted None).
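Reading per-type precision and recall off this matrix is mechanical: precision for a type is its diagonal cell divided by its column sum (everything predicted as that type), and recall is the diagonal cell divided by its row sum (every true entity of that type). A minimal sketch, using the numbers from the table above:

```python
# Confusion matrix from the table above: rows = true label, cols = predicted.
# Class order: PERSON, ORG, LOC, DATE, None (non-entity).
labels = ["PERSON", "ORG", "LOC", "DATE"]
cm = [
    [40,  2,  1,  0,   7],
    [ 3, 35,  2,  0,   5],
    [ 1,  2, 38,  1,   8],
    [ 0,  0,  1, 45,   4],
    [ 5,  4,  6,  3, 377],
]

for i, label in enumerate(labels):
    tp = cm[i][i]
    predicted_as_i = sum(row[i] for row in cm)  # column sum: all predicted as this type
    actually_i = sum(cm[i])                     # row sum: all true entities of this type
    precision = tp / predicted_as_i
    recall = tp / actually_i
    print(f"{label}: precision={precision:.2f} recall={recall:.2f}")
```

For PERSON this gives precision 40/49 ≈ 0.82 and recall 40/50 = 0.80; DATE is the strongest type at roughly 0.92 precision and 0.90 recall.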

Precision vs Recall Tradeoff with Examples

If we want to avoid wrongly tagging words as entities (high precision), we might miss some real entities (lower recall). For example, in legal documents, wrongly tagging a word as a person could cause confusion, so precision is important.

But in news summarization, missing a person or location (low recall) means losing important info, so recall is more important.

Balancing precision and recall depends on the task's goal.
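One common way to encode that task-dependent balance in a single number is the F-beta score, where beta > 1 weights recall more heavily and beta < 1 weights precision more heavily (beta = 1 is the familiar F1). A small sketch; the example precision/recall values are made up for illustration:

```python
def f_beta(precision: float, recall: float, beta: float = 1.0) -> float:
    """F-beta score: beta > 1 favors recall, beta < 1 favors precision."""
    if precision + recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Same hypothetical model (high precision, modest recall), scored two ways:
p, r = 0.90, 0.60
print(f_beta(p, r, beta=0.5))  # precision-leaning, e.g. legal tagging
print(f_beta(p, r, beta=2.0))  # recall-leaning, e.g. news summarization
```

The precision-leaning score (~0.82) rates this model much higher than the recall-leaning one (~0.64), even though the underlying predictions are identical.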

Good vs Bad Metric Values for Entity Recognition
  • Good: Precision and Recall above 85% for all entity types means the model finds most entities correctly and rarely makes mistakes.
  • Bad: Precision below 60% means many false entities are predicted, confusing users.
  • Bad: Recall below 50% means many real entities are missed, losing key information.
Common Pitfalls in Metrics for Entity Recognition
  • Accuracy paradox: Since most words are not entities, accuracy can be very high even if the model never finds entities.
  • Data leakage: If test data contains entities seen in training, metrics may look better than real performance.
  • Overfitting: Very high precision but low recall can mean the model only recognizes entities it memorized.
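The accuracy paradox is easy to demonstrate with arithmetic. Assume (for illustration) 500 tokens of which only 50 are true entities; a degenerate model that tags every token as a non-entity still scores well on accuracy:

```python
# Accuracy paradox sketch: a model that never predicts any entity.
# Token counts are hypothetical, chosen to mirror a typical class imbalance.
n_tokens = 500
n_entities = 50

# Predict "None" for every token: every non-entity token counts as correct.
accuracy = (n_tokens - n_entities) / n_tokens
recall = 0 / n_entities  # zero true positives, so recall is zero

print(accuracy, recall)  # 0.9 0.0 — 90% accurate, yet finds nothing
```

This is why the per-type precision and recall above, not overall accuracy, are the numbers to watch for entity recognition.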
Self Check

Your model has 98% accuracy but only 12% recall on PERSON entities. Is it good for production?

Answer: No. The high accuracy is misleading because most words are not entities, so a model that rarely predicts entities still scores well. A recall of 12% means the model misses 88% of PERSON entities, which is unacceptable if you need to find people in text.

Key Result
Precision and recall, not raw accuracy, are the key measures of how well entity types like PERSON, ORG, LOC, and DATE are correctly found and labeled.