
Feature extraction approach in Computer Vision - Model Metrics & Evaluation

Which metric matters for Feature Extraction and WHY

Feature extraction turns images into numerical descriptors a model can learn from. To check whether these features are good, we look at how well the model performs when using them. Accuracy is a common metric for simple tasks, but finer-grained metrics like precision, recall, and F1 score matter when classes are imbalanced or when different errors have different costs.

Why? Because good features let the model separate classes cleanly. If the features are poor, even the best model will struggle, so these metrics reveal whether the features capture the important information.

Confusion Matrix Example

Imagine a model using extracted features to classify cats vs. dogs. Here is a confusion matrix from 100 images:

      |            | Predicted Cat | Predicted Dog |
      |------------|---------------|---------------|
      | Actual Cat | 40 (TP)       | 5 (FN)        |
      | Actual Dog | 10 (FP)       | 45 (TN)       |
    

From this:

  • True Positives (TP) = 40 (correct cat detections)
  • False Positives (FP) = 10 (dogs wrongly called cats)
  • True Negatives (TN) = 45 (correct dog detections)
  • False Negatives (FN) = 5 (cats missed)
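The counts above plug directly into the standard metric formulas. A minimal sketch computing them in plain Python:

```python
# Metrics for the cat/dog confusion matrix above
TP, FP, TN, FN = 40, 10, 45, 5

accuracy  = (TP + TN) / (TP + FP + TN + FN)                # 85 correct of 100
precision = TP / (TP + FP)                                 # of all "cat" calls, how many were cats
recall    = TP / (TP + FN)                                 # of all real cats, how many were found
f1        = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
      f"recall={recall:.3f} f1={f1:.3f}")
```

For this matrix, accuracy is 0.85, precision 0.80, recall about 0.889, and F1 about 0.842.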
Precision vs Recall Tradeoff

Using features, the model can be tuned to catch more cats (high recall) or be more sure when it says "cat" (high precision).

High Precision: Few wrong cat labels, but might miss some cats (lower recall). Good if false alarms are costly.

High Recall: Finds most cats, but may include some dogs by mistake (lower precision). Good if missing cats is worse.

Example: For a wildlife camera, high recall helps find all cats. For a pet door that opens only for cats, high precision avoids letting dogs in.
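The tradeoff usually shows up when you sweep the model's decision threshold. A small sketch with made-up "cat" scores and labels (both are illustrative, not from the example above):

```python
# Hypothetical model scores for "cat" and the true labels (1 = cat)
scores = [0.95, 0.90, 0.80, 0.70, 0.60, 0.45, 0.30, 0.20, 0.10, 0.05]
labels = [1,    1,    1,    0,    1,    1,    0,    0,    0,    0]

def precision_recall(threshold):
    """Precision and recall when predicting 'cat' for scores >= threshold."""
    preds = [s >= threshold for s in scores]
    tp = sum(p and y for p, y in zip(preds, labels))
    fp = sum(p and not y for p, y in zip(preds, labels))
    fn = sum((not p) and y for p, y in zip(preds, labels))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return prec, rec

# A strict threshold favors precision (pet door); a loose one favors recall (wildlife camera).
for t in (0.75, 0.50, 0.40):
    p, r = precision_recall(t)
    print(f"threshold={t}: precision={p:.2f} recall={r:.2f}")
```

With these numbers, the strict threshold (0.75) gives perfect precision but misses cats, while the loose threshold (0.40) finds every cat at the cost of some false alarms.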

Good vs Bad Metric Values for Feature Extraction

Good Features: Model shows balanced precision and recall above 80%, F1 score near 0.85 or higher, and accuracy above 85%. Confusion matrix has low false positives and false negatives.

Bad Features: Model struggles with precision or recall below 50%, F1 score below 0.6, and accuracy near random guessing (50% for two classes). Confusion matrix shows many mistakes.
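These thresholds can be folded into a quick rule-of-thumb check. The cutoffs below come straight from the text and are illustrative, not universal:

```python
# Rule-of-thumb feature-quality check using the thresholds from the text
def feature_quality(precision, recall, f1, accuracy):
    # "Good": balanced precision/recall above 0.80, F1 >= 0.85, accuracy > 0.85
    if min(precision, recall) > 0.80 and f1 >= 0.85 and accuracy > 0.85:
        return "good"
    # "Bad": precision or recall below 0.50, F1 below 0.60, or near-random accuracy
    if min(precision, recall) < 0.50 or f1 < 0.60 or accuracy <= 0.55:
        return "bad"
    return "mixed"

print(feature_quality(0.86, 0.84, 0.85, 0.88))  # good
print(feature_quality(0.45, 0.90, 0.60, 0.52))  # bad
```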

Common Pitfalls in Metrics for Feature Extraction
  • Accuracy Paradox: High accuracy can hide poor feature quality if classes are imbalanced.
  • Data Leakage: Features accidentally include test info, inflating metrics falsely.
  • Overfitting: Features too tuned to training data cause great training metrics but poor real-world results.
  • Ignoring Class Balance: Not checking precision and recall can miss if features favor one class.
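The accuracy paradox from the first pitfall is easy to demonstrate with a toy imbalanced dataset (the 95/5 split here is invented for illustration):

```python
# Accuracy paradox: 95 dogs, 5 cats, and a model that always predicts "dog"
labels = ["dog"] * 95 + ["cat"] * 5
preds  = ["dog"] * 100  # degenerate model that ignores the features entirely

accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
cat_recall = sum(p == "cat" and y == "cat" for p, y in zip(preds, labels)) / 5

print(f"accuracy={accuracy:.2f}, cat recall={cat_recall:.2f}")  # 0.95 vs 0.00
```

95% accuracy while catching zero cats: exactly why precision and recall per class must be checked alongside accuracy.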
Self Check

Your model using extracted features has 98% accuracy but only 12% recall on the "cat" class. Is it good?

Answer: No. The model misses most cats (low recall), so the features likely fail to capture cat traits. The high accuracy is misleading because most images are probably dogs. You need better features or a way to handle the class imbalance.

Key Result
Good feature extraction leads to balanced precision and recall, ensuring the model captures key information for accurate predictions.