
Document-term matrix in NLP - Model Metrics & Evaluation

Which metric matters for Document-term matrix and WHY

A Document-term matrix (DTM) is itself just a numeric representation of text: each row is a document, each column is a vocabulary word, and each cell counts how often that word appears in that document. Because the DTM is a data format rather than a model, its quality is usually judged by how well it helps a model learn or find patterns.

Metrics like sparsity (the fraction of entries that are zero) matter because a very sparse matrix can slow down learning. And when the DTM is used for tasks like classification, metrics such as accuracy, precision, and recall of the model built on top of it become the ones that count.

In short, the DTM itself is a data format, so we look at metrics that tell us whether it represents the text well and helps models perform better.
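
A minimal sketch of building a DTM and measuring its sparsity, using only the standard library (real pipelines typically use a vectorizer such as scikit-learn's CountVectorizer; the toy documents here are invented):

```python
from collections import Counter

# Toy corpus (invented for illustration).
docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs",
]

# Vocabulary: every unique word across the corpus, in a fixed order.
vocab = sorted({word for doc in docs for word in doc.split()})

# One row per document; each entry counts one vocabulary word.
dtm = [[Counter(doc.split())[word] for word in vocab] for doc in docs]

# Sparsity: the fraction of entries that are zero.
zeros = sum(row.count(0) for row in dtm)
sparsity = zeros / (len(docs) * len(vocab))
```

Even this tiny corpus is more than half zeros; real corpora with tens of thousands of vocabulary words are far sparser still, which is why DTMs are usually stored as sparse matrices.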

Confusion matrix or equivalent visualization

Since the DTM is a data representation, it does not have a confusion matrix by itself. But when a classifier is trained on one, its confusion matrix looks like this:

      |                 | Predicted Positive  | Predicted Negative  |
      |-----------------|---------------------|---------------------|
      | Actual Positive | True Positive (TP)  | False Negative (FN) |
      | Actual Negative | False Positive (FP) | True Negative (TN)  |

This matrix helps us calculate precision and recall for models using the DTM.
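
Precision and recall fall directly out of those four cells. A sketch with made-up counts for a spam classifier built on a DTM:

```python
# Hypothetical confusion-matrix counts (invented for illustration).
tp, fn = 40, 10    # actual spam: caught vs. missed
fp, tn = 5, 945    # actual ham: wrongly flagged vs. correctly passed

precision = tp / (tp + fp)    # of flagged emails, how many were really spam
recall = tp / (tp + fn)       # of actual spam, how much was caught
accuracy = (tp + tn) / (tp + fn + fp + tn)
```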

Precision vs Recall tradeoff with concrete examples

Imagine using a DTM to detect spam emails:

  • High precision means most emails marked as spam really are spam. This avoids annoying users by sending good emails to the spam folder.
  • High recall means catching most spam emails, even at the cost of wrongly flagging some good ones.

Depending on what matters more, you might tune your model differently. The DTM quality affects how well the model can balance this tradeoff.
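
The tradeoff becomes concrete when you vary the decision threshold on a classifier's spam scores. The scores and labels below are invented for illustration:

```python
# Spam scores from a hypothetical classifier (higher = more spam-like).
scores = [0.95, 0.90, 0.80, 0.60, 0.55, 0.40, 0.30, 0.10]
labels = [1,    1,    0,    1,    0,    1,    0,    0]  # 1 = spam

def precision_recall(threshold):
    predicted = [score >= threshold for score in scores]
    tp = sum(p and y == 1 for p, y in zip(predicted, labels))
    fp = sum(p and y == 0 for p, y in zip(predicted, labels))
    fn = sum(not p and y == 1 for p, y in zip(predicted, labels))
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

strict = precision_recall(0.85)   # flag only near-certain spam
lenient = precision_recall(0.35)  # flag anything remotely spam-like
```

With the strict threshold every flagged email really is spam, but half the spam slips through; the lenient threshold catches all the spam but also flags good email. Which point you choose depends on the cost of each kind of mistake.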

What "good" vs "bad" metric values look like for this use case

Good DTM characteristics:

  • Relatively low sparsity (not too many zeros), so models can learn effectively.
  • A vocabulary that captures important meaning (not just very common words).

Good model metrics using DTM:

  • Accuracy above 80% for simple tasks.
  • Precision and recall balanced above 70% for spam detection.

Bad signs:

  • Very sparse DTM with many irrelevant words.
  • Model accuracy near random guessing (e.g., 50% for two classes).
  • Precision very high but recall very low, or vice versa, without a deliberate reason for the imbalance.
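
One simple way to keep the vocabulary meaningful and the matrix less sparse is to drop words that appear in too few documents, the idea behind scikit-learn's min_df parameter. A stdlib-only sketch with invented documents:

```python
from collections import Counter

docs = [
    "cheap pills online now",
    "cheap watches online",
    "meeting agenda attached",
]
min_df = 2  # keep only words appearing in at least 2 documents

# Document frequency: in how many documents each word occurs.
df = Counter(word for doc in docs for word in set(doc.split()))

# Filtered vocabulary: only words common enough to be useful features.
vocab = sorted(word for word, count in df.items() if count >= min_df)
```
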

Metrics pitfalls

  • Accuracy paradox: High accuracy can happen if one class dominates, but the model ignores the smaller class.
  • Data leakage: If the DTM includes words that reveal the answer directly, the model looks better but won't work in real life.
  • Overfitting: A very large DTM with many rare words can cause the model to memorize training data but fail on new data.
  • Ignoring sparsity: Too many zero entries slow down training and may reduce model quality.
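
The accuracy paradox from the first bullet is easy to show with numbers (the class split is invented):

```python
# 95 ham emails, 5 spam; a lazy model predicts "ham" for everything.
labels = [0] * 95 + [1] * 5
predictions = [0] * 100

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
tp = sum(p == 1 and y == 1 for p, y in zip(predictions, labels))
fn = sum(p == 0 and y == 1 for p, y in zip(predictions, labels))
recall = tp / (tp + fn)  # every spam email is missed
```

Accuracy comes out at 95% while recall on the spam class is zero, which is why imbalanced problems need per-class metrics.
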

Self-check question

Your model built on a Document-term matrix has 98% accuracy but only 12% recall on the spam class. Is it good for production? Why or why not?

Answer: No, it is not good. The model misses most spam emails (low recall), even though overall accuracy is high. This means it mostly predicts emails as not spam, which is not useful for catching spam.
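
Putting rough numbers on this scenario makes the answer concrete. The 1000-email split below is an assumption, chosen so the spam recall works out to exactly 12%:

```python
# Assumed split: 25 spam and 975 ham out of 1000 emails.
spam, ham = 25, 975
tp = 3               # spam correctly flagged: recall = 3/25 = 0.12
fn = spam - tp       # 22 spam emails slip through
fp, tn = 0, ham      # assume no ham is ever flagged

accuracy = (tp + tn) / (spam + ham)  # high, because ham dominates
recall = tp / (tp + fn)
```

Accuracy lands near 98% even though 22 of the 25 spam emails get through, which is exactly why accuracy alone is the wrong yardstick here.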

Key Result
Document-term matrix quality affects model metrics like precision and recall, which must be balanced for good text classification.