
Model evaluation best practices in Computer Vision - Model Metrics & Evaluation

Which metric matters and WHY

In computer vision, the right metric depends on the task. For image classification, accuracy shows how many images were correctly labeled. For object detection, precision and recall matter because you want to find all objects (high recall) but avoid false alarms (high precision). For segmentation, metrics like IoU (Intersection over Union) measure how well predicted shapes match real shapes. Choosing the right metric helps you understand if your model solves the problem well.
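As a sketch of the segmentation/detection metric mentioned above, IoU for two axis-aligned boxes can be computed directly. The `(x1, y1, x2, y2)` corner format used here is an assumption; detection libraries differ in box conventions.

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # zero if boxes don't overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # identical boxes → 1.0
```

An IoU of 1.0 means a perfect match; detection benchmarks often count a prediction as correct only when IoU exceeds a threshold such as 0.5.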

Confusion matrix example
      Confusion Matrix for Image Classification (3 classes):

           Predicted
           Cat  Dog  Bird
    Actual
    Cat    50    2     3
    Dog     4   45     1
    Bird    2    3    40

    Total samples = 150

    From this:
    - Accuracy = (50+45+40)/150 = 135/150 = 0.9 (90%)
    - Precision for Cat = TP/(TP+FP) = 50/(50+4+2) = 50/56 ≈ 0.89
    - Recall for Cat = TP/(TP+FN) = 50/(50+2+3) = 50/55 ≈ 0.91
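The calculations above can be reproduced for all three classes with a short loop over the confusion matrix (rows = actual, columns = predicted):

```python
# Confusion matrix from the example above: rows = actual, columns = predicted.
matrix = [
    [50, 2, 3],   # actual Cat
    [4, 45, 1],   # actual Dog
    [2, 3, 40],   # actual Bird
]
classes = ["Cat", "Dog", "Bird"]

total = sum(sum(row) for row in matrix)
accuracy = sum(matrix[i][i] for i in range(len(classes))) / total

for i, name in enumerate(classes):
    tp = matrix[i][i]
    fp = sum(matrix[r][i] for r in range(len(classes))) - tp  # column sum minus TP
    fn = sum(matrix[i]) - tp                                  # row sum minus TP
    print(f"{name}: precision={tp / (tp + fp):.2f}, recall={tp / (tp + fn):.2f}")

print(f"accuracy={accuracy:.2f}")  # 135/150 = 0.90
```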
    
Precision vs Recall tradeoff

Imagine a face recognition system unlocking your phone. You want high precision so it doesn't unlock for strangers (few false positives). But if it misses your face sometimes, that is okay (lower recall). On the other hand, a security camera detecting intruders needs high recall to catch all threats, even if it sometimes raises false alarms (lower precision). Understanding this tradeoff helps pick the right balance for your use case.
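In practice this tradeoff is tuned with the model's decision threshold: raising it makes the system pickier (precision up, recall down), lowering it makes it more permissive. A minimal sketch with made-up confidence scores (the `scored` data is hypothetical, not from a real model):

```python
# Hypothetical detector outputs: (confidence score, ground-truth label),
# where label 1 means the object is really present.
scored = [(0.95, 1), (0.90, 1), (0.80, 0), (0.70, 1), (0.40, 0), (0.30, 1)]

def precision_recall(threshold):
    """Precision and recall when predicting 'positive' for score >= threshold."""
    tp = sum(1 for s, y in scored if s >= threshold and y == 1)
    fp = sum(1 for s, y in scored if s >= threshold and y == 0)
    fn = sum(1 for s, y in scored if s < threshold and y == 1)
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

for t in (0.5, 0.85):
    p, r = precision_recall(t)
    print(f"threshold={t}: precision={p:.2f}, recall={r:.2f}")
```

With this data, threshold 0.5 gives precision 0.75 and recall 0.75, while the stricter threshold 0.85 gives precision 1.00 but recall only 0.50: the phone-unlock setting versus the security-camera setting.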

Good vs Bad metric values

Good metric values depend on the task. In medical image diagnosis, a recall above 0.95 is good because missing a disease is dangerous; in a photo tagging app, an accuracy above 0.85 is enough for user satisfaction. Bad values are the mirror image: precision below 0.5 means more than half of the detections are wrong, and recall below 0.5 means more than half of the real objects are missed. Always compare metrics against your problem's needs rather than a universal threshold.

Common pitfalls in model evaluation
  • Accuracy paradox: High accuracy can be misleading if classes are imbalanced. For example, 95% accuracy on 95% background images means the model ignores rare objects.
  • Data leakage: Using test images in training inflates metrics falsely.
  • Overfitting: Very high training accuracy but low test accuracy means the model memorizes training images, not generalizing well.
  • Ignoring metric context: Using only accuracy when recall matters can hide poor performance.
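The accuracy paradox from the first bullet is easy to demonstrate: on a set where 95% of images are background, a model that never detects anything still scores 95% accuracy while its recall on the rare class is zero.

```python
# Imbalanced evaluation set: 95 background images, 5 containing a rare object.
labels = [0] * 95 + [1] * 5      # 1 = rare object present
predictions = [0] * 100          # degenerate model: always predicts "background"

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
tp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 1)
fn = sum(1 for p, y in zip(predictions, labels) if p == 0 and y == 1)
recall = tp / (tp + fn)

print(f"accuracy={accuracy:.2f}, rare-class recall={recall:.2f}")  # 0.95, 0.00
```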

Self-check question

Your object detection model has 98% accuracy but only 12% recall on detecting rare objects. Is it good for production? Why or why not?

Answer: No, it is not good. The low recall means the model misses most rare objects, which could be critical. High accuracy is misleading here because most images do not have the rare object, so the model is mostly correct by saying "no object". Improving recall is essential.

Key Result
Choosing the right metric like precision, recall, or IoU is key to understanding if a computer vision model truly solves the task well.