
Label smoothing in PyTorch - Model Metrics & Evaluation

Metrics & Evaluation - Label smoothing
Which metric matters for Label smoothing and WHY

Label smoothing prevents the model from becoming overconfident in its predictions. Instead of training against hard one-hot targets, it softens them by assigning a small probability mass to the incorrect classes, so the model learns more general patterns. The key metrics to watch are Cross-Entropy Loss and Accuracy: Cross-Entropy Loss shows how well the model fits the smoothed targets, and Accuracy shows how often the model predicts the correct class. Because the targets are softened, training accuracy may be slightly lower, but the model often generalizes better.
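In PyTorch, label smoothing is built into `nn.CrossEntropyLoss` via the `label_smoothing` argument (available since PyTorch 1.10). A minimal sketch with made-up logits and targets:

```python
import torch
import torch.nn as nn

# Dummy batch: 4 samples, 3 classes (values are purely illustrative)
logits = torch.tensor([[2.0, 0.5, 0.1],
                       [0.2, 1.8, 0.3],
                       [0.1, 0.4, 2.2],
                       [1.5, 1.4, 0.2]])
targets = torch.tensor([0, 1, 2, 0])

# Hard one-hot targets: standard cross-entropy
hard_loss = nn.CrossEntropyLoss()(logits, targets)

# Smoothed targets: with label_smoothing=0.1 and 3 classes, the true
# class gets 1 - 0.1 + 0.1/3 of the target mass and each other class 0.1/3
smooth_loss = nn.CrossEntropyLoss(label_smoothing=0.1)(logits, targets)

print(hard_loss.item(), smooth_loss.item())
```

On mostly correct, fairly confident predictions like these, the smoothed loss is larger than the hard-label loss, because some target mass now sits on classes the model (correctly) assigns low probability.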

Confusion matrix example with Label smoothing
    Actual \ Predicted | Class A | Class B | Class C
    -------------------+---------+---------+--------
    Class A            |   45    |    3    |    2
    Class B            |    4    |   43    |    3
    Class C            |    1    |    5    |   44

    Total samples = 150
    

From this matrix, we calculate metrics like precision and recall for each class. Label smoothing helps reduce overconfidence that can cause wrong predictions to be very confident.
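Using the counts above, per-class precision and recall can be computed directly (plain Python, no framework needed):

```python
# Rows = actual class, columns = predicted class (counts from the matrix above)
confusion = [
    [45, 3, 2],   # actual A
    [4, 43, 3],   # actual B
    [1, 5, 44],   # actual C
]

for i, name in enumerate(["A", "B", "C"]):
    tp = confusion[i][i]
    predicted = sum(row[i] for row in confusion)   # column sum: all predicted as i
    actual = sum(confusion[i])                     # row sum: all truly class i
    precision = tp / predicted
    recall = tp / actual
    print(f"Class {name}: precision={precision:.2f}, recall={recall:.2f}")

# Class A: precision=0.90, recall=0.90
# Class B: precision=0.84, recall=0.86
# Class C: precision=0.90, recall=0.88
```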

Precision vs Recall tradeoff with Label smoothing

Label smoothing does not directly raise or lower precision and recall; it changes how confident the model's probability estimates are. Because the model is less certain about any single class, predicted probabilities sit further from 0 and 1, and the balance of false positives and false negatives near the decision threshold can shift in either direction depending on the data.

For example, in a spam filter, label smoothing can help the model avoid marking too many good emails as spam (false positives), improving precision. But it might also miss some spam emails (false negatives), lowering recall a bit.

So, label smoothing balances precision and recall by preventing the model from being too confident, which helps in noisy or uncertain data.

What "good" vs "bad" metric values look like with Label smoothing

Good: Cross-Entropy Loss steadily decreases during training, and accuracy improves without sudden jumps. Precision and recall are balanced, showing the model is confident but not overconfident.

Bad: Very low loss but accuracy does not improve, or accuracy is high but the model fails on new data (overfitting). Precision or recall is very low, meaning the model is either too cautious or too confident on wrong classes.

Common pitfalls with Label smoothing metrics
  • Accuracy paradox: Accuracy might be lower with label smoothing but the model is actually better at generalizing.
  • Misinterpreting loss: with smoothing, the optimal loss is no longer zero; even a perfect model has a nonzero loss floor, so comparing smoothed and unsmoothed loss values directly is misleading.
  • Overfitting signs: If loss keeps decreasing but validation accuracy drops, the model might be memorizing smoothed labels instead of learning patterns.
  • Ignoring class imbalance: Label smoothing does not fix class imbalance, so metrics like precision and recall per class are important.
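The loss pitfall above can be seen numerically: with smoothing, even a model that is essentially certain and correct cannot drive the loss to zero. A small sketch with contrived, near-perfect logits:

```python
import torch
import torch.nn.functional as F

# Near-perfect predictions: huge margin on the correct class
logits = torch.tensor([[20.0, 0.0, 0.0],
                       [0.0, 20.0, 0.0]])
targets = torch.tensor([0, 1])

plain = F.cross_entropy(logits, targets)
smoothed = F.cross_entropy(logits, targets, label_smoothing=0.1)

# Plain loss is near zero; the smoothed loss stays well above zero,
# because target mass on the wrong classes keeps it away from zero.
print(plain.item(), smoothed.item())
```

So a smoothed training loss that plateaus above zero is expected behavior, not a sign that training has stalled.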

Self-check question

Your model uses label smoothing and has 98% accuracy but only 12% recall on the fraud class. Is it good for production?

Answer: No. Despite the high overall accuracy, a recall of 12% on the fraud class means the model misses roughly 9 out of 10 fraud cases. For fraud detection, recall on the positive class is critical because a missed fraud case is costly. Label smoothing improves generalization but does not address low recall; you need to improve recall before production.

Key Result
Label smoothing improves model generalization by softening targets, balancing precision and recall, but requires careful interpretation of loss and accuracy.