Computer Vision (~8 mins)

Metrics & Evaluation - Why computer vision teaches machines to see
Which metric matters for this concept and WHY

In computer vision, common tasks include recognizing objects, detecting faces, and segmenting images. The key metrics for evaluating these tasks are accuracy, precision, recall, and F1 score. These metrics tell us how well the machine "sees" and understands images. For example, precision shows what fraction of the detected objects are actually correct, while recall shows what fraction of the real objects the machine found. We use these metrics because they measure whether the machine makes good decisions when interpreting images.

Confusion matrix or equivalent visualization (ASCII)
    Confusion Matrix Example for Object Detection:

          Predicted
          Yes    No
    Actual
    Yes   TP=80  FN=20
    No    FP=10  TN=90

    Total samples = 80 + 20 + 10 + 90 = 200

    Precision = TP / (TP + FP) = 80 / (80 + 10) ≈ 0.89
    Recall = TP / (TP + FN) = 80 / (80 + 20) = 0.80
    F1 Score = 2 * (0.89 * 0.80) / (0.89 + 0.80) ≈ 0.84
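The numbers above can be reproduced with a few lines of Python. This is a minimal sketch (the function name and counts are illustrative, not from any particular library):

```python
def detection_metrics(tp, fp, fn):
    """Compute precision, recall, and F1 from raw detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Counts from the confusion matrix above: TP=80, FP=10, FN=20.
p, r, f1 = detection_metrics(tp=80, fp=10, fn=20)
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
# precision=0.89 recall=0.80 f1=0.84
```

Note that TN does not appear in precision, recall, or F1; it only affects accuracy.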
    
Precision vs Recall tradeoff with concrete examples

Imagine a self-driving car that uses computer vision to detect pedestrians. Here, high recall is very important because missing a pedestrian (false negative) can cause accidents. So, the system should find almost all pedestrians, even if it sometimes mistakes other objects for people (lower precision).

On the other hand, a photo app that tags friends in pictures needs high precision. It should avoid tagging the wrong person (false positive) to keep users happy, even if it misses some friends (lower recall).

Balancing precision and recall depends on the goal. Computer vision models must be tuned to fit the real-life needs of their task.
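In practice, this tuning often comes down to the detector's confidence threshold. The sketch below uses hypothetical (confidence, is_real_pedestrian) scores to show the tradeoff: lowering the threshold accepts more detections, raising recall but lowering precision.

```python
# Hypothetical detector outputs: (confidence score, whether a real pedestrian).
scores = [(0.95, True), (0.90, True), (0.75, False),
          (0.60, True), (0.40, False), (0.30, True)]
total_real = sum(1 for _, real in scores if real)  # 4 real pedestrians

for threshold in (0.8, 0.5, 0.2):
    kept = [(s, real) for s, real in scores if s >= threshold]
    tp = sum(1 for _, real in kept if real)
    precision = tp / len(kept)
    recall = tp / total_real
    print(f"threshold={threshold}: precision={precision:.2f} recall={recall:.2f}")
```

A self-driving car would pick a low threshold here (recall reaches 1.00 at 0.2), while a photo-tagging app would pick a high one (precision is 1.00 at 0.8).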

What "good" vs "bad" metric values look like for this use case

Good metrics: Precision and recall above 0.85 usually mean the model sees well. For example, precision = 0.90 and recall = 0.88 means the model finds most objects and is mostly correct.

Bad metrics: Precision or recall below 0.50 means the model struggles. For example, precision = 0.40 means many false alarms, and recall = 0.45 means many objects are missed.

Accuracy alone can be misleading if the dataset is unbalanced (e.g., many images without objects). So, precision and recall give a clearer picture.

Metrics pitfalls (accuracy paradox, data leakage, overfitting indicators)
  • Accuracy paradox: If most images have no objects, a model that always says "no object" can have high accuracy but is useless.
  • Data leakage: If test images are too similar to training images, metrics look great but the model fails on new images.
  • Overfitting: Very high training accuracy but low test accuracy means the model memorizes images instead of learning to see.
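The accuracy paradox from the first bullet is easy to demonstrate. Assuming a toy dataset where only 10 of 1,000 images contain a stop sign, a degenerate model that never detects anything still scores 99% accuracy:

```python
# Imbalanced toy dataset: 10 images with stop signs (1), 990 without (0).
labels = [1] * 10 + [0] * 990
preds = [0] * 1000  # degenerate "model": always predicts "no object"

accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
recall = tp / sum(labels)

print(f"accuracy={accuracy:.2%} recall={recall:.2%}")
# accuracy=99.00% recall=0.00%
```

High accuracy, zero recall: the model never finds a single stop sign, which is exactly the failure mode the self-check below asks about.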
Self-check: Your model has 98% accuracy but 12% recall on detecting stop signs. Is it good?

No, it is not good. The model finds only 12% of actual stop signs, which is very low recall. Even though accuracy is high, the model misses most stop signs, which is dangerous for real driving. High recall is critical here to avoid accidents.

Key Result
Precision and recall are the key measures of how well computer vision models detect and recognize objects: precision tells us how correct the detections are, and recall tells us how complete they are.