
Model optimization (distillation, quantization) in NLP - Model Metrics & Evaluation

Which metric matters for Model optimization (distillation, quantization) and WHY

When optimizing models by distillation or quantization, the key metrics to watch are prediction-quality metrics: accuracy or a task-specific score (like F1 for NLP classification tasks). These methods shrink the model or speed up inference, but they can cause small drops in prediction quality. The goal is to keep the model as accurate as possible while making it smaller or faster.

Latency and model size are also important metrics here, but they are not about prediction quality. They measure how fast and small the model is after optimization.
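To make the size metric concrete, here is a minimal sketch of symmetric int8 quantization of a weight matrix using NumPy. The matrix shape and random weights are illustrative assumptions, not from any specific model; the point is the 4x size reduction (float32 to int8) and the small, bounded round-trip error.

```python
import numpy as np

# Hypothetical weight matrix (shape and values are illustrative only).
rng = np.random.default_rng(0)
weights = rng.standard_normal((256, 256)).astype(np.float32)

# Symmetric int8 quantization: map the largest-magnitude weight to 127.
scale = np.abs(weights).max() / 127.0
q_weights = np.round(weights / scale).astype(np.int8)
deq_weights = q_weights.astype(np.float32) * scale  # dequantize for inference

size_ratio = weights.nbytes / q_weights.nbytes   # float32 -> int8 is 4x smaller
max_error = np.abs(weights - deq_weights).max()  # rounding error, at most scale/2

print(f"size reduction: {size_ratio:.0f}x, max round-trip error: {max_error:.4f}")
```

The accuracy question is whether errors of this magnitude change the model's predictions, which is why quality metrics must be re-measured after quantization.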

Confusion matrix example after distillation
    Original model confusion matrix:
      TP=90  FP=10
      FN=5   TN=95

    Distilled model confusion matrix:
      TP=88  FP=12
      FN=7   TN=93

    Total samples = 90+10+5+95 = 200
    

Notice the small drop in true positives and increase in false negatives after distillation, showing a slight accuracy loss.
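The accuracy loss can be computed directly from the two confusion matrices above:

```python
def accuracy(tp, fp, fn, tn):
    # Accuracy = correct predictions / all predictions
    return (tp + tn) / (tp + fp + fn + tn)

acc_original = accuracy(tp=90, fp=10, fn=5, tn=95)   # (90+95)/200
acc_distilled = accuracy(tp=88, fp=12, fn=7, tn=93)  # (88+93)/200
print(acc_original, acc_distilled)  # 0.925 0.905
```

So distillation cost 2 percentage points of accuracy (92.5% down to 90.5%) in this example.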

Precision vs Recall tradeoff in model optimization

When we optimize a model, precision or recall can drop. For example, quantization might make the model less sensitive, lowering recall (missing some true cases). Distillation might oversimplify the model's decision boundary, lowering precision (more false alarms).

Example: For a spam detector, if recall drops, some spam emails get through. If precision drops, good emails get marked as spam. We must balance these based on what matters more.
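Reusing the confusion-matrix numbers from the distillation example above, precision and recall before and after optimization work out as follows:

```python
def precision(tp, fp):
    # Of everything flagged positive, how much was actually positive?
    return tp / (tp + fp)

def recall(tp, fn):
    # Of all actual positives, how many did we catch?
    return tp / (tp + fn)

# Original model:  TP=90, FP=10, FN=5
print(round(precision(90, 10), 3), round(recall(90, 5), 3))  # 0.9 0.947
# Distilled model: TP=88, FP=12, FN=7
print(round(precision(88, 12), 3), round(recall(88, 7), 3))  # 0.88 0.926
```

Both metrics drop slightly here; for a spam detector, the precision drop means more good emails flagged, and the recall drop means more spam getting through.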

Good vs Bad metric values after optimization

Good: Accuracy or F1 score drops by less than 1-2 percentage points compared to the original, with significant size or speed gains.

Bad: Accuracy or F1 score drops by more than 5 percentage points, causing poor predictions even if the model is smaller or faster.
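This rule of thumb can be expressed as a small helper. The function name and thresholds are illustrative defaults, not universal constants; pick thresholds that match your own use case.

```python
def optimization_verdict(baseline_score, optimized_score,
                         good_drop=0.02, bad_drop=0.05):
    """Classify an accuracy/F1 drop using the rule of thumb above.

    Thresholds are illustrative, not universal: <= 2 points is "good",
    > 5 points is "bad", anything in between needs a judgment call.
    """
    drop = baseline_score - optimized_score
    if drop <= good_drop:
        return "good"
    if drop > bad_drop:
        return "bad"
    return "borderline"

print(optimization_verdict(0.95, 0.94))  # good
print(optimization_verdict(0.95, 0.88))  # bad
```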

Common pitfalls in metrics for model optimization
  • Ignoring accuracy drop: Focusing only on size or speed and missing big accuracy loss.
  • Not testing on real data: Optimized model might perform worse on real-world inputs.
  • Overfitting to test set: Tuning optimization to a small test set can give misleading metrics.
  • Confusing latency with accuracy: Faster model is good, but not if predictions become unreliable.
Self-check question

Your original NLP model has 95% accuracy. After quantization, accuracy drops to 92%, but inference speed doubles and model size halves. Is this good?

Answer: It depends on your use case. A 3-percentage-point accuracy drop might be acceptable if speed and size improvements are critical, like on mobile devices. But if accuracy is crucial, this drop might be too large. Always balance metrics based on your needs.

Key Result
Model optimization aims to keep accuracy or F1 score high while reducing size and latency, balancing tradeoffs carefully.