PyTorch · ~8 mins

Custom transforms in PyTorch - Model Metrics & Evaluation

Which metric matters for Custom Transforms and WHY

Custom transforms modify data before it reaches the model. The main goal is to improve learning by making the data cleaner or more varied (augmentation), so the model generalizes better.
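In PyTorch, a custom transform is simply a callable object. The class name and the noise scheme below are illustrative, a minimal sketch rather than code from any specific library; with torchvision the sample would usually be a tensor or PIL image, but a plain list keeps the sketch self-contained:

```python
import random

class AddGaussianNoise:
    """Illustrative custom transform: adds zero-mean Gaussian noise to each value."""

    def __init__(self, std=0.1):
        self.std = std

    def __call__(self, sample):
        # With torchvision, `sample` would typically be a tensor;
        # a list of floats stands in for it here.
        return [x + random.gauss(0.0, self.std) for x in sample]

# Use it like any torchvision transform (e.g. inside transforms.Compose),
# or call it directly:
noisy = AddGaussianNoise(std=0.1)([1.0, 2.0, 3.0])
identity = AddGaussianNoise(std=0.0)([1.0, 2.0, 3.0])  # std=0 leaves data unchanged
```

Because the transform is just a callable, it slots into the same pipeline as built-in transforms.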

The metrics to watch are training loss and validation accuracy. Falling loss means the model is fitting the training data; rising validation accuracy means it predicts better on unseen data.

If the transforms help, validation accuracy should improve without signs of overfitting (training accuracy climbing while validation accuracy stalls or drops).
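One simple way to watch for that overfitting pattern is to track the gap between training and validation accuracy each epoch. The helper and the 0.10 threshold below are illustrative choices, not standard values:

```python
def overfitting_gap(train_acc, val_acc, threshold=0.10):
    """Return True when training accuracy exceeds validation accuracy
    by more than `threshold`, a rough sign of overfitting."""
    return (train_acc - val_acc) > threshold

print(overfitting_gap(0.99, 0.75))  # large gap: likely overfitting
print(overfitting_gap(0.90, 0.88))  # small gap: looks healthy
```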

Confusion Matrix Example

Suppose we classify images after applying custom transforms. Here is the resulting confusion matrix:

|                 | Predicted Positive | Predicted Negative |
|-----------------|--------------------|--------------------|
| Actual Positive | TP: 80             | FN: 20             |
| Actual Negative | FP: 10             | TN: 90             |

Total samples = 80 + 20 + 10 + 90 = 200

From this, we calculate:

  • Precision = TP / (TP + FP) = 80 / (80 + 10) ≈ 0.89
  • Recall = TP / (TP + FN) = 80 / (80 + 20) = 0.80
  • F1 Score = 2 × (Precision × Recall) / (Precision + Recall) ≈ 0.84
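The arithmetic above can be checked directly from the confusion-matrix counts:

```python
TP, FN, FP, TN = 80, 20, 10, 90

precision = TP / (TP + FP)                          # 80 / 90  ≈ 0.889
recall = TP / (TP + FN)                             # 80 / 100 = 0.800
f1 = 2 * precision * recall / (precision + recall)  # ≈ 0.842

accuracy = (TP + TN) / (TP + TN + FP + FN)          # 170 / 200 = 0.85
print(round(precision, 2), round(recall, 2), round(f1, 2))  # 0.89 0.8 0.84
```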

Precision vs Recall Tradeoff with Custom Transforms

Custom transforms can affect precision and recall differently.

For example, if transforms add noise, recall might drop because the model misses some true positives.

If transforms sharpen features, precision might improve because the model makes fewer false positive mistakes.

Choosing transforms depends on the task:

  • For medical images, high recall is key (catch all cases).
  • For spam detection, high precision is important (avoid marking good emails as spam).
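The tradeoff becomes concrete when you sweep a decision threshold over model scores: a strict threshold favors precision, a lenient one favors recall. The scores and labels below are made up for illustration:

```python
# Hypothetical model scores and true labels (1 = positive class)
scores = [0.9, 0.8, 0.6, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0]

def precision_recall(threshold):
    """Compute precision and recall when predicting positive for score >= threshold."""
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

print(precision_recall(0.7))   # strict: precision 1.0, recall ~0.67
print(precision_recall(0.35))  # lenient: precision 0.75, recall 1.0
```

A medical-imaging task would push the threshold down (recall first); a spam filter would push it up (precision first).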

Good vs Bad Metric Values for Custom Transforms

Good:

  • Validation accuracy improves or stays stable after adding transforms.
  • Training loss decreases steadily.
  • Precision and recall both improve or balance well.

Bad:

  • Validation accuracy drops significantly.
  • Training loss stays high or fluctuates.
  • Precision or recall drops sharply, indicating transforms confuse the model.

Common Pitfalls with Metrics and Custom Transforms

  • Accuracy paradox: Accuracy looks good but model fails on minority classes.
  • Data leakage: Transforms accidentally use test data info, inflating metrics.
  • Overfitting: Transforms make training data too easy, causing poor generalization.
  • Ignoring validation: Only training metrics improve, but validation worsens.
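The leakage pitfall is easiest to see with normalization statistics, which must be computed on the training split only. The numbers below are illustrative:

```python
train = [1.0, 2.0, 3.0, 4.0]
val = [10.0, 12.0]

# Correct: fit the normalization statistic on the training split only...
train_mean = sum(train) / len(train)            # 2.5

# ...and apply that same statistic to validation data.
val_normalized = [x - train_mean for x in val]  # [7.5, 9.5]

# Leaky (wrong): computing the mean over train + val lets validation
# information influence the transform, inflating validation metrics.
leaky_mean = sum(train + val) / len(train + val)

print(val_normalized)
```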

Self Check

Your model has 98% accuracy but 12% recall on fraud cases after applying custom transforms. Is it good?

Answer: No. The low recall means the model misses most fraud cases, which is critical. High accuracy is misleading because fraud is rare. You should improve recall, for example by adjusting the transforms or the model.
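The paradox in this self-check can be reproduced with a made-up imbalanced dataset: a model that never flags fraud still scores high accuracy while catching nothing:

```python
# 1000 transactions, only 20 are fraud (positive class = 1)
labels = [1] * 20 + [0] * 980
preds = [0] * 1000  # a useless model that never flags fraud

accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
recall = sum(p == 1 and y == 1 for p, y in zip(preds, labels)) / sum(labels)

print(accuracy)  # 0.98 -- looks great
print(recall)    # 0.0  -- catches zero fraud cases
```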

Key Result
Custom transforms are working when validation accuracy improves and precision and recall stay in balance; judge them by validation metrics, not training metrics alone.