In computer vision, the choice of metric depends on the task. For image classification, accuracy is common because it shows how often the model predicts the correct label. For object detection, precision and recall matter more because we want to find all objects (high recall) while avoiding false alarms (high precision). For segmentation, metrics like Intersection over Union (IoU) measure how well the predicted area overlaps the real object. Choosing the right metric tells us whether the model is truly good at its job.
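As a minimal sketch of the IoU idea, here is how the overlap of two axis-aligned boxes could be computed; the boxes are given as (x1, y1, x2, y2) corner coordinates, and the example values are made up:

```python
def iou(box_a, box_b):
    # Corners of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 10x10 boxes overlapping in a 5x5 region: intersection 25, union 175
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))
```

An IoU of 25/175 ≈ 0.14 would count as a poor match; detection benchmarks often require IoU ≥ 0.5 before a prediction counts as correct.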
What computer vision encompasses - Model Metrics & Evaluation
Which metric matters for this concept and WHY
Confusion matrix or equivalent visualization (ASCII)
For image classification (e.g., cat vs dog):
                 Predicted
                 Cat   Dog
    Actual Cat    50     5
           Dog     3    42
TP (Cat) = 50, FP (Cat) = 3, FN (Cat) = 5, TN (Cat) = 42
This matrix helps calculate precision and recall for each class.
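Using the counts from the matrix above (with cat as the positive class), the per-class metrics can be sketched like this:

```python
# Counts taken from the confusion matrix above, cat as the positive class
tp, fp, fn, tn = 50, 3, 5, 42

precision = tp / (tp + fp)                    # 50 / 53
recall    = tp / (tp + fn)                    # 50 / 55
accuracy  = (tp + tn) / (tp + fp + fn + tn)   # 92 / 100

print(f"precision={precision:.3f} recall={recall:.3f} accuracy={accuracy:.3f}")
# prints: precision=0.943 recall=0.909 accuracy=0.920
```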
Precision vs Recall tradeoff with concrete examples
Imagine a security camera detecting people entering a store:
- High precision: The camera rarely mistakes objects for people. Few false alarms. Good if you want to avoid bothering staff with false alerts.
- High recall: The camera catches almost every person, even if some false alarms happen. Good if missing a person is costly, like for safety monitoring.
Balancing precision and recall depends on what matters more: avoiding false alarms or missing real detections.
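The tradeoff can be made concrete by sweeping the detector's confidence threshold. The scores and labels below are made-up numbers for eight frames (1 = person actually present); raising the threshold trades recall for precision:

```python
# Hypothetical detector confidences and ground-truth labels (made-up data)
scores = [0.95, 0.90, 0.80, 0.60, 0.55, 0.40, 0.30, 0.10]
labels = [1,    1,    0,    1,    0,    1,    0,    0]

def precision_recall(threshold):
    # A detection fires whenever the score clears the threshold
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return p, r

# A strict threshold gives high precision but misses people;
# a loose threshold catches everyone but raises false alarms.
for t in (0.85, 0.5, 0.2):
    p, r = precision_recall(t)
    print(f"threshold={t}: precision={p:.2f} recall={r:.2f}")
```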
What "good" vs "bad" metric values look like for this use case
For a face recognition system:
- Good: Accuracy above 95%, precision and recall above 90%. The system correctly identifies faces with few mistakes.
- Bad: Accuracy around 60%, precision or recall below 50%. The system often misses faces or wrongly identifies people.
Good metrics mean the system is reliable and useful in real life.
Metrics pitfalls (accuracy paradox, data leakage, overfitting indicators)
- Accuracy paradox: In unbalanced data (e.g., 99% background, 1% object), a model guessing only background gets high accuracy but is useless.
- Data leakage: If test images are too similar to training images, metrics look better than they should, but the model won't generalize to new data.
- Overfitting: Very high training accuracy but low test accuracy means the model memorizes training images, not learning general patterns.
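The accuracy paradox from the first bullet can be demonstrated in a few lines. The counts are made up: 990 background frames, 10 frames containing the rare object, and a degenerate model that always predicts "background":

```python
# Imbalanced data: 990 negatives, 10 positives (made-up counts)
labels = [0] * 990 + [1] * 10
preds  = [0] * 1000  # degenerate model: always predicts "background"

accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
recall   = sum(p == 1 and y == 1 for p, y in zip(preds, labels)) / sum(labels)

print(f"accuracy={accuracy:.2%} recall={recall:.0%}")
# prints: accuracy=99.00% recall=0%
```

Accuracy alone looks excellent, yet the model never finds the object; this is exactly why recall (or IoU, for localization) must be checked alongside accuracy on imbalanced tasks.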
Self-check: Your model has 98% accuracy but 12% recall on detecting rare objects. Is it good?
No, it is not good. The high accuracy likely comes from many images without the rare object. The very low recall means the model misses most of the rare objects, which defeats the purpose of detection. You need to improve recall to catch more rare objects.
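One hypothetical set of counts consistent with the self-check numbers (10,000 images, 50 of which contain the rare object) shows how 98% accuracy can coexist with 12% recall:

```python
# Made-up counts matching the self-check: 10,000 images, 50 positives
tp, fn = 6, 44        # only 6 of 50 rare objects found -> recall = 12%
tn, fp = 9794, 156    # the huge negative class props up accuracy

accuracy  = (tp + tn) / (tp + tn + fp + fn)
recall    = tp / (tp + fn)
precision = tp / (tp + fp)
print(f"accuracy={accuracy:.0%} recall={recall:.0%} precision={precision:.3f}")
# prints: accuracy=98% recall=12% precision=0.037
```

With only 6 of 50 rare objects detected, the headline accuracy is driven almost entirely by the 9,794 correct "no object" predictions.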
Key Result
In computer vision, choosing metrics like accuracy, precision, recall, or IoU depends on the task to properly evaluate model performance.