
Inception modules in Computer Vision - Model Metrics & Evaluation

Which metric matters for Inception modules and WHY

Inception modules are building blocks for image-recognition networks that apply convolutions at several kernel sizes in parallel. The first metric to check is accuracy: the fraction of images the model labels correctly. Because Inception modules are designed to capture features at multiple scales, accuracy tells you whether that multi-scale design actually improves classification.

Besides accuracy, top-5 accuracy is also important. It checks whether the correct label appears among the model's five most confident guesses, which is useful when many classes look alike.
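A minimal sketch of how top-5 accuracy can be computed, in plain Python with made-up scores and labels (any real pipeline would use tensors, but the logic is the same):

```python
# Top-5 accuracy: a prediction counts as correct if the true label is
# among the five highest-scoring classes for that sample.

def top5_accuracy(scores, labels):
    """scores: list of per-class score lists; labels: true class indices."""
    hits = 0
    for row, label in zip(scores, labels):
        # Indices of the 5 classes with the highest scores
        top5 = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:5]
        if label in top5:
            hits += 1
    return hits / len(labels)

# Two illustrative samples over 8 classes
scores = [
    [0.1, 0.5, 0.2, 0.05, 0.04, 0.03, 0.05, 0.03],   # true class 2 is in the top 5
    [0.9, 0.01, 0.01, 0.02, 0.02, 0.01, 0.02, 0.01], # true class 5 is not
]
labels = [2, 5]

print(top5_accuracy(scores, labels))  # -> 0.5
```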

Confusion matrix example
      Actual \ Predicted | Cat | Dog | Bird | Total
      -------------------|-----|-----|------|------
      Cat                | 45  |  3  |  2   |  50
      Dog                |  4  | 40  |  6   |  50
      Bird               |  1  |  5  | 44   |  50
      -------------------|-----|-----|------|------
      Total              | 50  | 48  | 52   | 150

This matrix shows how many images of each animal were predicted correctly or incorrectly. For example, 45 cats were correctly predicted as cats (true positives for the cat class), while 3 cats were wrongly predicted as dogs and 2 as birds (false negatives for the cat class).
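The matrix above is enough to compute overall accuracy and per-class precision and recall directly. A sketch in plain Python, using the same numbers:

```python
# The confusion matrix above, as nested lists (rows = actual, cols = predicted)
classes = ["Cat", "Dog", "Bird"]
cm = [
    [45, 3, 2],   # actual Cat
    [4, 40, 6],   # actual Dog
    [1, 5, 44],   # actual Bird
]

total = sum(sum(row) for row in cm)                 # 150
correct = sum(cm[i][i] for i in range(len(cm)))     # 45 + 40 + 44 = 129
accuracy = correct / total                          # 0.86

for i, name in enumerate(classes):
    col_sum = sum(row[i] for row in cm)   # everything predicted as this class
    row_sum = sum(cm[i])                  # everything actually this class
    precision = cm[i][i] / col_sum        # e.g. Dog: 40 / 48 ≈ 0.83
    recall = cm[i][i] / row_sum           # e.g. Dog: 40 / 50 = 0.80
    print(f"{name}: precision={precision:.2f}, recall={recall:.2f}")
```

Note that the diagonal holds the correct predictions, each column sum gives the precision denominator, and each row sum gives the recall denominator.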

Precision vs Recall tradeoff with examples

Imagine the model detects cats in photos.

  • Precision means: Of all images predicted as cats, how many really are cats? High precision means few wrong cat guesses.
  • Recall means: Of all actual cat images, how many did the model find? High recall means the model misses few cats.

If the model has high precision but low recall, it rarely says "cat" unless very sure, but misses many cats. If it has high recall but low precision, it finds most cats but also wrongly calls other animals cats.

For Inception modules, balancing precision and recall is important to recognize many objects correctly without too many mistakes.
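The tradeoff described above can be made concrete by sweeping a decision threshold over model scores. A minimal sketch with made-up scores for a binary "cat" detector:

```python
# Precision/recall tradeoff: raising the threshold makes the model say "cat"
# only when confident (precision up, recall down); lowering it catches more
# cats but also more non-cats (recall up, precision down).

def precision_recall(scores, labels, threshold):
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Illustrative scores and true labels (1 = cat, 0 = not cat)
scores = [0.95, 0.9, 0.8, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1,    1,   0,   1,   0,    1,   0,   0]

p, r = precision_recall(scores, labels, 0.85)  # strict: precision 1.00, recall 0.50
print(f"strict:  precision={p:.2f}, recall={r:.2f}")
p, r = precision_recall(scores, labels, 0.35)  # lenient: precision 0.67, recall 1.00
print(f"lenient: precision={p:.2f}, recall={r:.2f}")
```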

What good vs bad metric values look like for Inception modules
  • Good: Accuracy above 80% on a diverse image set, precision and recall both above 75%, and top-5 accuracy above 90%. This means the model is reliable and finds most objects correctly.
  • Bad: Accuracy below 50%, precision or recall below 50%, or top-5 accuracy near random chance (for N classes, a random guess lands in the top 5 about 5/N of the time, e.g., 5% for 100 classes). This means the model struggles to learn useful features.
Common pitfalls in metrics for Inception modules
  • Accuracy paradox: If the dataset is mostly one class, high accuracy can be misleading. The model might just guess the common class.
  • Data leakage: If test images are too similar to training images, metrics look better but model won't generalize.
  • Overfitting: Very high training accuracy but low test accuracy means the model memorizes training images but fails on new ones.
  • Ignoring top-5 accuracy: For many classes, top-1 accuracy alone may not show model usefulness.
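The accuracy paradox from the first pitfall is easy to demonstrate. A minimal sketch with a made-up imbalanced dataset (95% common class) and a degenerate model that always predicts the majority class:

```python
# Accuracy paradox: on imbalanced data, always predicting the common class
# yields high accuracy while completely missing the rare class.

labels = [0] * 95 + [1] * 5   # 0 = common class, 1 = rare class
preds = [0] * 100             # degenerate model: always predicts class 0

accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
rare_recall = sum(p == 1 and y == 1 for p, y in zip(preds, labels)) / 5

print(accuracy)     # -> 0.95  (looks great)
print(rare_recall)  # -> 0.0   (the rare class is never found)
```

This is why per-class recall (or a confusion matrix) should accompany accuracy whenever the class distribution is skewed.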
Self-check question

Your Inception model has 98% accuracy but only 12% recall on a rare animal class. Is it good for production? Why or why not?

Answer: No. Even though overall accuracy is high, the model misses almost 9 out of 10 images of the rare class (12% recall). This is the accuracy paradox in action: the common classes dominate the accuracy figure, hiding a failure on exactly the cases that may matter most in production.

Key Result
Accuracy (including top-5) together with a balanced precision-recall tradeoff are the key metrics for evaluating how effectively Inception modules perform in image recognition.