When we turn images into numbers (pixels and channels), we need to check how well a model interprets those numbers to recognize or classify images. The key metrics are accuracy for simple, balanced tasks, and precision, recall, and F1 score when classes are imbalanced or mistakes carry different costs. Together, these metrics tell us whether the model correctly identifies images or systematically confuses classes.
Image as numerical data (pixels, channels) in Computer Vision - Model Metrics & Evaluation
Which metric matters for this concept and WHY
Confusion matrix or equivalent visualization (ASCII)
Confusion Matrix Example for 3 classes (Cat, Dog, Bird):

                  Predicted
               Cat   Dog   Bird
  True  Cat     50     2      3
        Dog      4    45      1
        Bird     2     3     40

Total samples = 150
Here, the diagonal entries (50, 45, 40) are correct predictions;
off-diagonal entries are mistakes.
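The matrix above can be turned into per-class metrics with a small sketch (pure Python; the class names and counts are taken from the example):

```python
# Confusion matrix from the example above: rows = true class, cols = predicted.
labels = ["Cat", "Dog", "Bird"]
cm = [
    [50, 2, 3],   # true Cat
    [4, 45, 1],   # true Dog
    [2, 3, 40],   # true Bird
]

total = sum(sum(row) for row in cm)
correct = sum(cm[i][i] for i in range(len(labels)))
accuracy = correct / total

for i, name in enumerate(labels):
    tp = cm[i][i]                                     # diagonal entry
    fn = sum(cm[i]) - tp                              # rest of the row
    fp = sum(cm[r][i] for r in range(len(cm))) - tp   # rest of the column
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    print(f"{name}: precision={precision:.3f}, recall={recall:.3f}")

print(f"accuracy={accuracy:.3f}")  # 135/150 = 0.900
```

Note how precision for a class uses its column (what the model predicted) while recall uses its row (what was actually true).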
Precision vs Recall tradeoff with concrete examples
Imagine a model that finds cats in photos:
- High precision: When it says "cat," it is almost always right. Good if you want to avoid false alarms, like tagging a dog as a cat.
- High recall: It finds almost all cats, even if some mistakes happen. Good if missing a cat is worse, like in wildlife monitoring.
Choosing precision or recall depends on what mistakes cost more in your task.
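The tradeoff often comes down to a decision threshold on the model's confidence scores. A minimal sketch with made-up cat-detector scores (the score values are illustrative assumptions, not real model output):

```python
# Hypothetical (score, is_cat) pairs from a cat detector.
# Raising the threshold trades recall for precision.
preds = [(0.95, True), (0.90, True), (0.80, False), (0.70, True),
         (0.60, True), (0.40, False), (0.30, True), (0.10, False)]

def precision_recall(threshold):
    tp = sum(1 for s, y in preds if s >= threshold and y)
    fp = sum(1 for s, y in preds if s >= threshold and not y)
    fn = sum(1 for s, y in preds if s < threshold and y)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

for t in (0.25, 0.50, 0.85):
    p, r = precision_recall(t)
    print(f"threshold={t:.2f}: precision={p:.2f}, recall={r:.2f}")
```

A low threshold catches every cat but admits false alarms (high recall, lower precision); a high threshold only fires when it is sure (high precision, lower recall).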
What "good" vs "bad" metric values look like for this use case
For image data tasks:
- Good: accuracy above 90%, with precision and recall above 85% for every class, suggests the model interprets pixel data well.
- Bad: accuracy below 60%, or very low precision/recall (below 50%) on any class, suggests the model struggles to interpret pixel values correctly.
Metrics pitfalls (accuracy paradox, data leakage, overfitting indicators)
- Accuracy paradox: If one class is very common, a model that always predicts that class can score high accuracy while being useless in practice.
- Data leakage: If test images are too similar to training images, metrics look better but model won't generalize.
- Overfitting: Very high training accuracy but low test accuracy means the model memorizes pixels instead of learning patterns.
Self-check question
Your image classifier has 98% accuracy but only 12% recall on the rare "bird" class. Is it good for production? Why not?
Answer: No, because it misses most birds. High accuracy is misleading if the model ignores rare classes. You need better recall to catch birds reliably.
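One set of counts consistent with the self-check percentages (the absolute numbers are assumed for illustration):

```python
# 10,000 images, only 50 of which are birds; the model finds just 6 of them.
n_total   = 10_000
n_birds   = 50
birds_hit = 6      # true birds correctly predicted as "bird"
other_err = 156    # mistakes on the remaining, common classes

accuracy    = (n_total - (n_birds - birds_hit) - other_err) / n_total
bird_recall = birds_hit / n_birds

print(accuracy)      # 0.98
print(bird_recall)   # 0.12
```

Missing 44 of 50 birds barely dents overall accuracy because birds are rare, which is exactly why the 98% figure is misleading.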
Key Result
Accuracy alone can be misleading; precision and recall reveal how well the model interprets pixel data for each class.