
Video understanding basics in Prompt Engineering / GenAI - Model Metrics & Evaluation

Which metric matters for Video Understanding and WHY

Video understanding means teaching a computer to watch videos and know what is happening. The main goal is to recognize actions, objects, or events in the video.

The key metrics are Accuracy, Precision, Recall, and F1-score. These tell us how well the model identifies the right actions or objects without mistakes.

For example, if the model says "someone is running" in a video, precision tells us how often it was right when it said that. Recall tells us how many of the actual running moments it found. F1-score balances both.
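
These formulas can be sketched in a few lines (the counts below are hypothetical, not from a real model):

```python
# Hypothetical counts for the "someone is running" example
tp = 36   # model said "running" and it really was running
fp = 4    # model said "running" but it was not
fn = 9    # real running moments the model missed

precision = tp / (tp + fp)   # 36 / 40 = 0.90: how often "running" calls were right
recall    = tp / (tp + fn)   # 36 / 45 = 0.80: how many running moments were found
f1 = 2 * precision * recall / (precision + recall)

print(f"precision={precision:.2f}  recall={recall:.2f}  f1={f1:.2f}")
```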

Sometimes, we also use Mean Average Precision (mAP) for detecting multiple objects or actions in videos. It measures how well the model finds all correct items and avoids wrong ones.
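
A minimal sketch of the idea behind mAP, using the simplified formulation where AP is the mean of the precision values at each correct detection in a confidence-ranked list (real benchmarks add interpolation and IoU matching; the detection lists here are hypothetical):

```python
def average_precision(ranked_hits, num_relevant):
    """Simplified AP: detections sorted by confidence, True = correct.
    AP = mean of the precision values at each correct detection."""
    hits = 0
    precision_sum = 0.0
    for rank, is_hit in enumerate(ranked_hits, start=1):
        if is_hit:
            hits += 1
            precision_sum += hits / rank   # precision at this rank
    return precision_sum / num_relevant if num_relevant else 0.0

# Hypothetical ranked detections for two action classes
ap_run  = average_precision([True, True, False, True], num_relevant=3)
ap_jump = average_precision([True, False, True], num_relevant=2)
mean_ap = (ap_run + ap_jump) / 2   # mAP = average of per-class APs

print(f"AP(run)={ap_run:.3f}  AP(jump)={ap_jump:.3f}  mAP={mean_ap:.3f}")
```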

Confusion Matrix for Video Action Recognition
                   Predicted
                 Run  Walk  Jump
    Actual Run    50     5     2
           Walk    3    45     4
           Jump    1     2    40

    (rows = actual class, columns = predicted class)
    Total samples = 50 + 5 + 2 + 3 + 45 + 4 + 1 + 2 + 40 = 152
    

From this matrix:

  • True Positives (TP) for "Run" = 50
  • False Positives (FP) for "Run" = 3 + 1 = 4 (times model said "Run" but it was "Walk" or "Jump")
  • False Negatives (FN) for "Run" = 5 + 2 = 7 (times model missed "Run")
  • True Negatives (TN) = total - TP - FP - FN = 152 - 50 - 4 - 7 = 91

We use these to calculate precision, recall, and F1 for each action.
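
The per-class calculation for the matrix above can be sketched as:

```python
# Rows = actual class, columns = predicted class (from the matrix above)
labels = ["Run", "Walk", "Jump"]
cm = [[50, 5, 2],
      [3, 45, 4],
      [1, 2, 40]]

metrics = {}
for i, name in enumerate(labels):
    tp = cm[i][i]
    fn = sum(cm[i]) - tp                  # rest of the row: missed actions
    fp = sum(row[i] for row in cm) - tp   # rest of the column: false alarms
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    metrics[name] = (precision, recall, f1)
    print(f"{name}: precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")
```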

Precision vs Recall Tradeoff in Video Understanding

Imagine a security camera that detects suspicious actions like "falling".

  • High Precision: When the model says "falling", it is almost always correct. This avoids false alarms.
  • High Recall: The model finds almost every actual fall, even if it sometimes makes mistakes.

If we want to avoid missing any falls (important for safety), we focus on high recall.

If we want to avoid bothering people with false alarms, we focus on high precision.

Balancing both with F1-score helps find a good middle ground.
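
One way to see the tradeoff: a detector usually outputs a confidence score, and moving the decision threshold trades precision against recall. A sketch with hypothetical scores:

```python
# Hypothetical fall-detection outputs: (model confidence, was it really a fall?)
scores = [(0.95, True), (0.90, True), (0.80, False), (0.70, True),
          (0.55, False), (0.40, True), (0.30, False), (0.20, False)]

total_falls = sum(actual for _, actual in scores)  # 4 real falls

def precision_recall(threshold):
    """Treat every detection at or above the threshold as an alarm."""
    alarms = [actual for conf, actual in scores if conf >= threshold]
    tp = sum(alarms)
    fp = len(alarms) - tp
    precision = tp / (tp + fp) if alarms else 0.0
    recall = tp / total_falls
    return precision, recall

# Strict threshold: fewer alarms, perfect precision, but half the falls missed
print(precision_recall(0.85))   # (1.0, 0.5)
# Loose threshold: every fall caught, but some alarms are false
print(precision_recall(0.35))
```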

What Good vs Bad Metrics Look Like for Video Understanding

Good metrics:

  • Accuracy above 85% on recognizing actions
  • Precision and recall both above 80%
  • F1-score close to precision and recall, showing balance
  • Mean Average Precision (mAP) above 75% for object detection in videos

Bad metrics:

  • High accuracy but very low recall (model misses many actions)
  • High recall but very low precision (model makes many false detections)
  • F1-score much lower than precision or recall, showing imbalance
  • mAP below 50%, meaning poor detection quality

Common Pitfalls in Video Understanding Metrics
  • Accuracy Paradox: Videos often have many frames with no action. A model that always says "no action" can have high accuracy but is useless.
  • Data Leakage: Using parts of the same video in training and testing can make metrics look better than reality.
  • Overfitting: Model performs well on training videos but poorly on new videos, showing metrics that don't generalize.
  • Ignoring Class Imbalance: Some actions happen rarely. Use per-class precision and recall (or macro-averaged F1) so rare actions are not hidden behind frequent ones.
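
The accuracy paradox from the first bullet can be shown in a few lines (frame counts are hypothetical):

```python
# Hypothetical stream: 1000 frames, only 10 actually contain a fall
actual    = [1] * 10 + [0] * 990   # 1 = fall, 0 = no action
predicted = [0] * 1000             # degenerate model: always predicts "no action"

correct = sum(a == p for a, p in zip(actual, predicted))
accuracy = correct / len(actual)   # 99% -- looks great on paper
tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
recall = tp / sum(actual)          # 0% -- the model never finds a single fall

print(f"accuracy={accuracy:.1%}  recall={recall:.0%}")
```
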

Self-Check Question

Your video action recognition model has 98% accuracy but only 12% recall on "falling" events. Is it good for safety monitoring? Why or why not?

Answer: No, it is not good. The model misses 88% of actual falls (low recall), which is dangerous for safety. High accuracy is misleading because most frames have no falls. Improving recall is critical here.
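
A quick check of the arithmetic in the answer (100 real falls is an assumed round number for illustration):

```python
# recall = TP / (TP + FN); with 12% recall and, say, 100 real falls:
recall = 0.12
actual_falls = 100
detected = round(recall * actual_falls)   # 12 falls trigger an alert
missed = actual_falls - detected          # 88 falls go completely unnoticed
print(f"missed {missed} of {actual_falls} falls")
```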

Key Result
In video understanding, balanced precision and recall (measured by F1-score) are key to reliable action detection, not just accuracy.