
Action recognition basics in Computer Vision - Model Metrics & Evaluation

Which metric matters for Action Recognition and WHY

In action recognition, we want to know how well the model identifies the correct action from videos or image sequences. The main metric is accuracy, which tells us the percentage of correctly predicted actions out of all attempts.

However, accuracy alone can be misleading if some actions happen more often than others. So, we also use precision and recall for each action class to understand if the model is good at finding the right actions without too many mistakes.

Precision tells us: When the model says an action happened, how often is it right?

Recall tells us: Out of all times an action actually happened, how many did the model find?

Finally, the F1 score balances precision and recall, giving a single number to compare models.
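These three definitions can be written out directly. A minimal Python sketch, using made-up ground-truth and predicted label lists for a single class:

```python
# Per-class precision, recall, and F1 for one action class ("Run"),
# computed from hypothetical ground-truth and predicted label lists.
actual    = ["Walk", "Run", "Run", "Jump", "Run", "Walk"]
predicted = ["Walk", "Run", "Walk", "Jump", "Run", "Run"]

cls = "Run"
tp = sum(a == cls and p == cls for a, p in zip(actual, predicted))  # correct detections
fp = sum(a != cls and p == cls for a, p in zip(actual, predicted))  # false alarms
fn = sum(a == cls and p != cls for a, p in zip(actual, predicted))  # missed actions

precision = tp / (tp + fp)  # when the model says "Run", how often is it right?
recall    = tp / (tp + fn)  # of all actual "Run" clips, how many were found?
f1 = 2 * precision * recall / (precision + recall)
print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```

The F1 score is the harmonic mean of precision and recall, so it is pulled toward whichever of the two is lower.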

Confusion Matrix Example

Imagine a model recognizing three actions: Walking, Running, and Jumping. Here is a confusion matrix showing predictions vs actual actions:

                Predicted
                Walk  Run  Jump
    Actual
      Walk       50    2    3
      Run         4   45    1
      Jump        2    3   40


Explanation:

  • 50 times the model correctly predicted Walking (true positives for Walking)
  • 2 times it predicted Running when the action was actually Walking (false positives for Running, false negatives for Walking)
  • 3 times it predicted Jumping when the action was actually Walking (false positives for Jumping, false negatives for Walking)
  • And so on for other actions.
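The per-class numbers can be read straight off this matrix: the diagonal entry is the true-positive count, the rest of that column are false positives, and the rest of that row are false negatives. A small Python sketch with the matrix values copied from above:

```python
# Per-class precision and recall from the confusion matrix above
# (rows = actual class, columns = predicted class).
cm = [
    [50, 2, 3],   # actual Walking
    [4, 45, 1],   # actual Running
    [2, 3, 40],   # actual Jumping
]
classes = ["Walking", "Running", "Jumping"]

results = {}
for i, name in enumerate(classes):
    tp = cm[i][i]                             # diagonal: correct predictions
    predicted_as = sum(row[i] for row in cm)  # column sum: all predictions of this class
    actually_was = sum(cm[i])                 # row sum: all actual instances of this class
    results[name] = (tp / predicted_as, tp / actually_was)
    print(f"{name}: precision={results[name][0]:.3f}, recall={results[name][1]:.3f}")
```

For Walking, for example, precision is 50/56 (56 clips were predicted as Walking) and recall is 50/55 (55 clips actually were Walking).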
Precision vs Recall Tradeoff with Examples

In action recognition, sometimes we want to catch every instance of an action (high recall). For example, in security, missing a suspicious action is bad.

Other times, we want to be sure the detected action is really correct (high precision). For example, in sports analysis, wrongly labeling a move can confuse coaches.

Improving recall may lower precision because the model guesses more actions, including wrong ones. Improving precision may lower recall because the model is more careful and misses some actions.

Choosing which to prioritize depends on the use case.
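This tradeoff is easiest to see with a decision threshold on the model's confidence. A sketch with made-up confidence scores and labels for one action: raising the threshold makes the model more careful, so precision goes up and recall goes down.

```python
# Precision/recall tradeoff via a decision threshold on (hypothetical)
# model confidence scores that the action "Running" occurred.
scores = [0.95, 0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2, 0.1]
labels = [1,    1,   0,   1,   1,   0,    1,   0,   0,   0]  # 1 = actually Running

def precision_recall(threshold):
    preds = [s >= threshold for s in scores]
    tp = sum(p and y for p, y in zip(preds, labels))        # correct detections
    fp = sum(p and not y for p, y in zip(preds, labels))    # false alarms
    fn = sum((not p) and y for p, y in zip(preds, labels))  # missed actions
    return tp / (tp + fp), tp / (tp + fn)

# Sweep the threshold: a low threshold favors recall, a high one favors precision.
for t in (0.3, 0.5, 0.9):
    p, r = precision_recall(t)
    print(f"threshold={t}: precision={p:.2f}, recall={r:.2f}")
```

At threshold 0.3 the model flags almost everything (perfect recall, lower precision); at 0.9 it only flags its most confident cases (perfect precision here, but it misses most Running instances).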

What Good vs Bad Metric Values Look Like

Good metrics (rough rules of thumb; acceptable values depend on the task and on class balance):

  • Accuracy above 85% means the model is mostly correct.
  • Precision and recall above 80% for each action means the model finds actions well and is usually right.
  • F1 scores close to precision and recall show balance.

Bad metrics:

  • Accuracy below 60% means many wrong predictions.
  • Precision very low (e.g., 40%) means many false alarms.
  • Recall very low (e.g., 30%) means many missed actions.
  • Big gaps between precision and recall show the model is biased toward guessing or being too cautious.
Common Pitfalls in Metrics for Action Recognition
  • Accuracy paradox: If one action is very common, a model guessing only that action can have high accuracy but poor usefulness.
  • Data leakage: If training and test videos overlap, metrics look better but the model won't work well on new videos.
  • Overfitting: Very high training accuracy but low test accuracy means the model memorized training videos, not learned actions.
  • Ignoring class imbalance: Not checking precision and recall per action hides poor performance on rare actions.
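
The accuracy paradox from the first pitfall is easy to reproduce. A sketch with a hypothetical imbalanced test set (95 Walking clips, 5 Running clips) and a degenerate model that always predicts the majority class:

```python
# Accuracy paradox: on a heavily imbalanced (hypothetical) test set,
# a model that only ever predicts the majority class still looks accurate.
actual    = ["Walking"] * 95 + ["Running"] * 5
predicted = ["Walking"] * 100  # degenerate model: always guesses Walking

accuracy = sum(a == p for a, p in zip(actual, predicted)) / len(actual)
running_recall = sum(a == "Running" and p == "Running"
                     for a, p in zip(actual, predicted)) / 5
print(accuracy, running_recall)  # high accuracy, zero recall on the rare class
```

The 95% accuracy hides the fact that the model never detects Running at all, which is exactly why per-class precision and recall must be checked.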
Self Check

Your action recognition model has 98% accuracy but only 12% recall on the action "Running." Is it good for production?

Answer: No, it is not good. The model misses most "Running" actions (low recall), even if overall accuracy is high. This means it often fails to detect "Running," which could be critical depending on the application.

Key Result
In action recognition, balanced precision and recall per action class are key to reliable performance beyond overall accuracy.