
Why pre-trained models save time in Computer Vision - Why Metrics Matter

Which metric matters for this concept and WHY

When using pre-trained models, the key metrics to watch are training time and accuracy. Pre-trained models save time because they start with features learned during previous training, so they need fewer steps to learn your task. Watching accuracy ensures the model still performs well after fine-tuning.

Confusion matrix or equivalent visualization (ASCII)
    Example confusion matrix after fine-tuning a pre-trained model:

                 Predicted
                 +-----+-----+
                 | Pos | Neg |
    Actual +-----+-----+-----+
           | Pos |  85 |  10 |
           | Neg |  15 |  90 |
           +-----+-----+-----+

    Total samples = 200
    TP = 85, FP = 15, FN = 10, TN = 90

    This shows good accuracy (87.5%) with balanced errors after a short fine-tuning run.
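
A quick sanity check of these counts, computed in plain Python:

```python
# Counts taken from the confusion matrix above.
tp, fp, fn, tn = 85, 15, 10, 90

accuracy = (tp + tn) / (tp + fp + fn + tn)   # 175 / 200 = 0.875
precision = tp / (tp + fp)                   # 85 / 100  = 0.85
recall = tp / (tp + fn)                      # 85 / 95   ≈ 0.895

print(f"accuracy={accuracy:.3f} precision={precision:.3f} recall={recall:.3f}")
```

All three values are close together, which is what "balanced errors" means in practice.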
    
Precision vs Recall tradeoff with concrete examples

Pre-trained models help balance precision and recall faster. For example:

  • High precision: The model raises few false alarms, even if it misses some objects. Useful in quality control, where acting on a false detection is costly.
  • High recall: The model finds most objects, even at the cost of some false alarms. Useful in safety checks, where missing an object is dangerous.

Pre-trained models start with good features, so you can quickly adjust this balance with less data and time.
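
One common way to adjust this balance is to move the decision threshold on the model's confidence scores. A minimal sketch with made-up scores and labels (the values are illustrative, not from a real model):

```python
# Illustrative confidence scores and true labels (1 = object present).
# In practice these come from your fine-tuned model on a validation set.
scores = [0.95, 0.90, 0.80, 0.65, 0.55, 0.40, 0.30, 0.20]
labels = [1,    1,    0,    1,    0,    1,    0,    0]

def precision_recall(threshold):
    """Precision and recall if we detect only when score >= threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# High threshold -> fewer detections, higher precision.
# Low threshold  -> more detections, higher recall.
for t in (0.85, 0.35):
    p, r = precision_recall(t)
    print(f"threshold={t}: precision={p:.2f} recall={r:.2f}")
```

Raising the threshold here trades recall for precision; lowering it does the opposite. With a pre-trained backbone, the scores are usually well separated early in fine-tuning, so this sweep stabilizes quickly.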

What "good" vs "bad" metric values look like for this use case

Good: Accuracy above 85% after a few training epochs, with balanced precision and recall around 80% or higher. Training time is short because the model already knows useful features.

Bad: Accuracy below 70%, or very low precision or recall, meaning the model is not learning your task well. Training also takes far longer when a model starts from scratch instead of from pre-trained features.

Metrics pitfalls (accuracy paradox, data leakage, overfitting indicators)
  • Accuracy paradox: High accuracy can be misleading if data is unbalanced. For example, if most images are background, the model might guess background and get high accuracy but fail to detect objects.
  • Data leakage: Using test images in training can make metrics look better than reality.
  • Overfitting: Very high training accuracy but low test accuracy means the model memorized training data and won't generalize well.
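
The accuracy paradox from the first bullet can be shown with a toy imbalanced dataset (the 95/5 background-to-object split is an illustrative assumption):

```python
# 95 background images (0) and 5 object images (1): heavily imbalanced.
labels = [0] * 95 + [1] * 5

# A useless model that always predicts "background".
predictions = [0] * 100

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
tp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 1)
recall = tp / sum(labels)

print(f"accuracy={accuracy:.2f}, recall={recall:.2f}")  # 95% accuracy, 0% recall
```

Despite 95% accuracy, the model never detects a single object, which is why recall must be checked alongside accuracy on imbalanced data.
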
Your model has 98% accuracy but 12% recall on fraud. Is it good?

No, it is not good for fraud detection. The high accuracy likely comes from many normal cases. The very low recall means the model misses most fraud cases, which is dangerous. For fraud, recall is critical because missing fraud is costly.
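
To make those numbers concrete, assume 10,000 transactions of which 200 are fraud (a hypothetical split chosen to be consistent with 98% accuracy and 12% recall):

```python
total = 10_000
fraud = 200                 # actual fraud cases
tp = 24                     # fraud the model catches: 24 / 200 = 12% recall
fn = fraud - tp             # 176 fraud cases missed
fp = 24                     # normal transactions wrongly flagged
tn = total - fraud - fp     # 9,776 normal transactions correctly passed

accuracy = (tp + tn) / total
recall = tp / (tp + fn)

print(f"accuracy={accuracy:.2%}, recall={recall:.0%}, missed fraud={fn}")
```

The model looks excellent by accuracy yet lets 176 of 200 fraud cases through, which is exactly the accuracy paradox described above.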

Key Result
Pre-trained models reduce training time while maintaining good accuracy and balanced precision-recall.