
Text feature basics (CountVectorizer, TF-IDF) in ML Python - Model Metrics & Evaluation

Which metric matters for this concept and WHY

When working with text features like CountVectorizer and TF-IDF, the key metrics are the accuracy, precision, and recall of the model built on those features. These vectorizers only transform text into numbers; the quality of that transformation is judged by how well the downstream model predicts.
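To make the transformation concrete, here is a minimal pure-Python sketch of CountVectorizer-style raw counts and classic TF-IDF weights. The three-document corpus and the unsmoothed idf = log(N / df) formula are illustrative assumptions (scikit-learn's TfidfVectorizer uses a smoothed variant):

```python
import math

docs = ["spam offer now", "meeting now", "offer offer spam"]  # toy corpus (assumption)

# CountVectorizer-style step: build a vocabulary and raw term counts per document
vocab = sorted({word for doc in docs for word in doc.split()})
counts = [[doc.split().count(word) for word in vocab] for doc in docs]

# Classic TF-IDF: term frequency times log(N / document frequency)
N = len(docs)
df = [sum(1 for doc in docs if word in doc.split()) for word in vocab]
tfidf = [[tf * math.log(N / df[j]) for j, tf in enumerate(row)] for row in counts]

print(vocab)      # ['meeting', 'now', 'offer', 'spam']
print(counts[2])  # [0, 0, 2, 1] -- 'offer' appears twice in the third document
```

The count matrix is what a classifier actually consumes; the TF-IDF step rescales each count by how informative the word is across the corpus.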

For example, if you use these features in a spam detection model, precision tells you how many emails marked as spam really are spam, and recall tells you how many spam emails you caught. Both matter depending on your goal.

Confusion matrix or equivalent visualization (ASCII)
      Actual \ Predicted  | Spam (Positive) | Not Spam (Negative)
      --------------------+-----------------+--------------------
      Spam (Positive)     |     TP = 80     |     FN = 20
      Not Spam (Negative) |     FP = 10     |     TN = 90

This matrix shows how many emails were correctly or incorrectly classified using text features.
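Precision, recall, and accuracy follow directly from these counts. A quick sketch using the TP/FN/FP/TN values from the matrix above:

```python
tp, fn, fp, tn = 80, 20, 10, 90  # values from the confusion matrix above

precision = tp / (tp + fp)  # of emails flagged as spam, how many really were spam
recall    = tp / (tp + fn)  # of actual spam emails, how many we caught
accuracy  = (tp + tn) / (tp + fn + fp + tn)

print(round(precision, 3))  # 0.889
print(round(recall, 3))     # 0.8
print(round(accuracy, 3))   # 0.85
```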

Precision vs Recall tradeoff with concrete examples

Precision is important when you want to avoid false alarms. For example, in spam detection, high precision means fewer good emails are wrongly marked as spam.

Recall is important when you want to catch as many positives as possible. For example, in detecting harmful content, high recall means fewer harmful messages are missed.

CountVectorizer and TF-IDF affect this tradeoff through how well they represent important words. TF-IDF often helps by down-weighting words that appear in most documents, which tends to improve precision without sacrificing much recall.
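The down-weighting effect is easiest to see in the idf term alone: a word that appears in every document gets idf = log(N/N) = 0, while a rarer word keeps a positive weight. A sketch with the classic unsmoothed idf and a hypothetical 100-document corpus:

```python
import math

N = 100                          # hypothetical corpus size
idf_common = math.log(N / 100)   # word in all 100 documents, e.g. "the"
idf_rare   = math.log(N / 5)     # word in only 5 documents, e.g. "refinance"

print(idf_common)          # 0.0 -- a ubiquitous word contributes nothing
print(round(idf_rare, 2))  # 3.0 -- a rare, discriminative word is weighted up
```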

What "good" vs "bad" metric values look like for this use case

Good metrics:

  • Precision and recall both above 0.8 (80%) for balanced performance.
  • F1 score (balance of precision and recall) above 0.8.
  • Consistent results on training and test data, showing features generalize well.

Bad metrics:

  • High precision but very low recall (e.g., precision 0.9, recall 0.3) means many positives missed.
  • High recall but very low precision (e.g., recall 0.9, precision 0.2) means many false alarms.
  • Very different metrics on training vs test data, indicating overfitting or poor feature representation.
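F1, the harmonic mean of precision and recall, makes the "bad" cases above easy to spot because it punishes whichever of the two is low. A quick sketch using the example values from the bullets:

```python
def f1(precision, recall):
    # Harmonic mean: drops sharply if either precision or recall is low
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.85, 0.85), 2))  # 0.85 -- balanced "good" case
print(round(f1(0.9, 0.3), 2))    # 0.45 -- high precision, low recall
print(round(f1(0.2, 0.9), 2))    # 0.33 -- high recall, low precision
```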

Metrics pitfalls (accuracy paradox, data leakage, overfitting indicators)
  • Accuracy paradox: In imbalanced text data (e.g., 95% non-spam), accuracy can be high even if the model ignores spam. Precision and recall give better insight.
  • Data leakage: If test data words appear in training in a way that leaks labels, metrics look better but model fails in real use.
  • Overfitting: Very high training metrics but low test metrics suggest the text features capture noise, not true patterns.
  • Ignoring stop words: CountVectorizer without removing common words can inflate feature space and hurt model quality.
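The accuracy paradox is worth seeing with numbers: on a corpus that is 95% non-spam, a useless model that never predicts spam still scores 95% accuracy while its spam recall is zero. The counts below are hypothetical:

```python
n_spam, n_ham = 50, 950  # imbalanced toy dataset (assumption)

# A "model" that labels every email as non-spam
tp, fn = 0, n_spam       # catches no spam at all
fp, tn = 0, n_ham        # but never raises a false alarm either

accuracy = (tp + tn) / (n_spam + n_ham)
recall   = tp / (tp + fn)

print(accuracy)  # 0.95 -- looks great
print(recall)    # 0.0  -- completely useless for finding spam
```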

Self-check: Your model has 98% accuracy but 12% recall on spam. Is it good?

No, this model is not good for spam detection. The 98% accuracy is misleading because spam is rare. The 12% recall means it only finds 12 out of 100 spam emails, missing most spam. This would let many spam emails through, which is bad for users.
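One hypothetical set of counts consistent with those metrics shows how misleading the headline accuracy is. The 10,000-email corpus with 2% spam is an assumption chosen so the arithmetic works out:

```python
tp, fn = 24, 176    # recall = 24 / 200 = 0.12 on the 200 spam emails
fp, tn = 24, 9776   # hypothetical errors on the 9,800 non-spam emails

accuracy = (tp + tn) / (tp + fn + fp + tn)
recall   = tp / (tp + fn)

print(accuracy)  # 0.98 -- dominated by the abundant non-spam class
print(recall)    # 0.12 -- 176 of 200 spam emails slip through
```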

Key Result
Precision and recall are key to evaluate text features; high accuracy alone can be misleading in imbalanced text tasks.