
Word similarity and analogies in NLP - Model Metrics & Evaluation

Which metric matters for Word similarity and analogies and WHY

For word similarity and analogies, we want to measure how close the model's word pairs or analogies are to human judgment or known relationships. Common metrics include cosine similarity for word pairs and accuracy for analogy tasks. Cosine similarity measures how similar two word vectors are by looking at the angle between them, which tells us if words are related in meaning. For analogies, accuracy shows how often the model correctly predicts the missing word in "A is to B as C is to ?" problems. These metrics matter because they directly reflect how well the model understands word meanings and relationships, which is the goal of these tasks.
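
Cosine similarity can be computed directly from two word vectors. A minimal sketch with NumPy, using made-up 3-dimensional vectors rather than real embeddings:

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine of the angle between two vectors (1.0 = same direction)."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy vectors (hypothetical values, for illustration only)
king = np.array([0.8, 0.6, 0.1])
queen = np.array([0.7, 0.7, 0.2])
apple = np.array([0.1, 0.2, 0.9])

print(cosine_similarity(king, queen))  # high: semantically related words
print(cosine_similarity(king, apple))  # lower: unrelated words
```

With real embeddings (word2vec, GloVe), the vectors would have hundreds of dimensions, but the computation is identical.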

Confusion matrix or equivalent visualization

For analogy tasks, we can report a simple accuracy, since each question has exactly one expected answer:

    Total analogies tested: 1000
    Correct predictions: 850
    Incorrect predictions: 150
    
    Accuracy = Correct / Total = 850 / 1000 = 0.85 (85%)
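
A common way to answer "A is to B as C is to ?" is vector arithmetic: find the word closest to B − A + C. A minimal sketch with a tiny hypothetical embedding table:

```python
import numpy as np

# Toy 2-d embeddings (hypothetical values, just to illustrate the method)
emb = {
    "man":   np.array([1.0, 0.0]),
    "woman": np.array([1.0, 1.0]),
    "king":  np.array([3.0, 0.0]),
    "queen": np.array([3.0, 1.0]),
}

def solve_analogy(a, b, c, emb):
    """Return the word whose vector is closest to b - a + c, excluding the inputs."""
    target = emb[b] - emb[a] + emb[c]
    def cos(u, v):
        return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    candidates = {w: v for w, v in emb.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cos(candidates[w], target))

# Accuracy over a (tiny) test set of (A, B, C, expected) tuples
tests = [("man", "woman", "king", "queen")]
correct = sum(solve_analogy(a, b, c, emb) == d for a, b, c, d in tests)
print(f"Accuracy = {correct / len(tests):.2f}")
```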
    

For word similarity, we often compare model scores to human scores using correlation (like Pearson or Spearman), not a confusion matrix. For example:

    Human similarity scores: [0.9, 0.7, 0.2, 0.4]
    Model cosine similarities: [0.88, 0.65, 0.25, 0.45]
    Correlation coefficient (Pearson) ≈ 0.99 (high agreement)
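
Both correlations can be computed with NumPy alone. Pearson measures agreement in the raw scores; Spearman is just Pearson applied to the ranks, so it measures agreement in the ordering:

```python
import numpy as np

human = np.array([0.9, 0.7, 0.2, 0.4])    # human similarity scores from above
model = np.array([0.88, 0.65, 0.25, 0.45])  # model cosine similarities

# Pearson correlation on the raw scores
pearson = np.corrcoef(human, model)[0, 1]

# Spearman correlation: Pearson on the ranks
ranks_h = human.argsort().argsort()
ranks_m = model.argsort().argsort()
spearman = np.corrcoef(ranks_h, ranks_m)[0, 1]

print(f"Pearson  = {pearson:.2f}")
print(f"Spearman = {spearman:.2f}")  # 1.00 here: the two orderings match exactly
```

Benchmarks such as WordSim-353 typically report Spearman, since only the ranking of word pairs needs to agree with humans, not the absolute scores.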
    
Precision vs Recall tradeoff with concrete examples

In word similarity and analogies, precision and recall are less commonly used because these tasks are not typical binary classification problems. However, if we treat analogy prediction as classification, where the model may decline to answer when it is unsure, we can think about tradeoffs:

  • High precision: When the model predicts an analogy, it is usually correct. This means fewer wrong answers but might miss some correct analogies (low recall).
  • High recall: The model tries to predict many analogies, catching most correct ones but also making more mistakes (lower precision).

Example: A language learning app uses analogy tasks to test vocabulary. High precision means the app rarely gives wrong answers, so learners trust it. High recall means the app covers many analogy types but might confuse learners with some wrong answers. Balancing these depends on the app's goal.
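
The tradeoff above can be sketched by letting the model abstain below a confidence threshold. All scores here are hypothetical; the point is only how the threshold moves precision and recall in opposite directions:

```python
def precision_recall(predictions, threshold):
    """predictions: list of (confidence, is_correct) pairs.
    The model only answers when confidence >= threshold."""
    attempted = [ok for conf, ok in predictions if conf >= threshold]
    tp = sum(attempted)                      # answered and correct
    precision = tp / len(attempted) if attempted else 0.0
    recall = tp / sum(ok for _, ok in predictions)  # correct answers found / all correct
    return precision, recall

# Hypothetical (confidence, correct?) pairs for six analogy questions
preds = [(0.95, True), (0.90, True), (0.80, True),
         (0.60, False), (0.55, True), (0.40, False)]

for t in (0.5, 0.85):
    p, r = precision_recall(preds, t)
    print(f"threshold={t}: precision={p:.2f}, recall={r:.2f}")
```

Raising the threshold is the "learners trust it" setting (high precision, low recall); lowering it is the "broad coverage" setting.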

What "good" vs "bad" metric values look like for this use case

Word similarity:

  • Good: Correlation with human scores above 0.8 means the model's similarity matches human intuition well.
  • Bad: Correlation below 0.5 means the model's similarity scores do not align well with human judgments.

Analogies:

  • Good: Accuracy above 80% means the model correctly solves most analogy questions.
  • Bad: Accuracy below 50% means the model guesses poorly and does not understand word relationships well.

Metrics pitfalls

  • Ignoring context: Word similarity can change with context, but static metrics may miss this, leading to misleading scores.
  • Overfitting to test sets: Models tuned too much on standard analogy datasets may perform well there but poorly in real use.
  • Accuracy paradox: High accuracy on analogy tasks with many easy questions may hide poor performance on harder cases.
  • Data leakage: If analogy test data overlaps with training data, metrics will be unrealistically high.
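
The data-leakage pitfall is cheap to guard against: check the train/test overlap before trusting the numbers. A minimal sketch with hypothetical analogy tuples:

```python
# Each analogy is an (A, B, C, answer) tuple; sets make overlap checks trivial
train_analogies = {
    ("man", "woman", "king", "queen"),
    ("paris", "france", "rome", "italy"),
}
test_analogies = {
    ("man", "woman", "king", "queen"),   # leaked: also in training data
    ("big", "bigger", "small", "smaller"),
}

leaked = train_analogies & test_analogies
if leaked:
    print(f"Warning: {len(leaked)} test analogies also appear in training data")
```
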

Self-check question

Your word analogy model has 98% accuracy on a small, easy test set but only 60% on a larger, diverse set. Is it good for production? Why or why not?

Answer: No, it is not good for production. The high accuracy on the small set likely means the model learned those specific examples (overfitting). The lower accuracy on the diverse set shows it struggles with real-world cases. Production models need consistent performance on varied data.

Key Result
Cosine similarity and analogy accuracy are key metrics showing how well models capture word meaning and relationships.