
Why Video Extends Computer Vision to Temporal Data: Why Metrics Matter

Which metric matters for this concept and WHY

When working with video data in computer vision, you need metrics that capture both spatial and temporal accuracy. Frame-level accuracy measures how well the model predicts each individual frame, while temporal consistency metrics check whether predictions remain smooth and stable over time. This matters because video adds a time dimension: the model must not only recognize objects but also track changes and motion across frames.
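The two kinds of metrics can be sketched in a few lines. This is a minimal illustration with made-up per-frame labels, not a standard library API: frame accuracy counts per-frame matches, while temporal consistency counts how often adjacent frames agree (its complement is the flicker rate).

```python
def frame_accuracy(preds, labels):
    """Fraction of frames where the prediction matches the label."""
    correct = sum(p == y for p, y in zip(preds, labels))
    return correct / len(labels)

def temporal_consistency(preds):
    """Fraction of adjacent frame pairs whose predictions agree.
    High values mean smooth predictions; low values mean flicker."""
    if len(preds) < 2:
        return 1.0
    stable = sum(a == b for a, b in zip(preds, preds[1:]))
    return stable / (len(preds) - 1)

# Hypothetical track: the object is a car in every frame,
# but the detector flickers in and out.
labels = ["car", "car", "car", "car", "car", "car"]
preds  = ["car", "car", "none", "car", "none", "car"]

print(frame_accuracy(preds, labels))   # 4/6 ≈ 0.67
print(temporal_consistency(preds))     # 1/5 = 0.2 -> heavy flicker
```

Note how a model can look acceptable frame-by-frame (67% accuracy) while being nearly unusable temporally (only 20% of adjacent frames agree).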

Confusion matrix or equivalent visualization (ASCII)
Frame 1 Prediction:  Object A detected (TP), Object B missed (FN)
Frame 2 Prediction:  Object A detected (TP), Object B detected (TP)
Frame 3 Prediction:  Object A missed (FN), Object B detected (TP)

Confusion Matrix over frames:
          Predicted
          A    B    None
Actual A  2    0    1
Actual B  0    2    1

TP = 4 (correct detections)
FP = 0 (no false alarms)
FN = 2 (missed objects)
TN = not well defined for detection; roughly, background regions correctly left undetected
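The counts above can be reproduced by tallying detection outcomes per frame. A small sketch (the frame data mirrors the three frames listed earlier):

```python
# Each frame records which ground-truth objects exist and which
# the model detected.
frames = [
    {"truth": {"A", "B"}, "detected": {"A"}},       # Frame 1: B missed
    {"truth": {"A", "B"}, "detected": {"A", "B"}},  # Frame 2: both found
    {"truth": {"A", "B"}, "detected": {"B"}},       # Frame 3: A missed
]

tp = sum(len(f["truth"] & f["detected"]) for f in frames)  # correct detections
fn = sum(len(f["truth"] - f["detected"]) for f in frames)  # missed objects
fp = sum(len(f["detected"] - f["truth"]) for f in frames)  # false alarms

print(tp, fp, fn)  # 4 0 2
```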

Precision vs Recall tradeoff with concrete examples

In video tasks, precision measures what fraction of detected objects are actually correct, and recall measures what fraction of real objects were found. For example, in surveillance video, high recall is important so that no suspicious activity is missed, even at the cost of some false alarms (lower precision). In contrast, for video editing tools, high precision is preferred to avoid marking wrong objects, even if some objects are missed (lower recall).
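Both metrics follow directly from the confusion-matrix counts in this section (TP=4, FP=0, FN=2). A minimal sketch:

```python
def precision(tp, fp):
    """Of everything detected, how much was correct?"""
    return tp / (tp + fp) if tp + fp else 0.0

def recall(tp, fn):
    """Of everything real, how much was found?"""
    return tp / (tp + fn) if tp + fn else 0.0

print(precision(4, 0))  # 1.0  -> every detection was correct
print(recall(4, 2))     # 4/6 ≈ 0.67 -> a third of real objects missed
```

With these counts the detector never raises a false alarm (perfect precision) yet still misses a third of the real objects, which is exactly the profile the surveillance example warns against.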

What "good" vs "bad" metric values look like for this use case

A good video model has high frame-level accuracy (e.g., >90%) and smooth temporal predictions with low flickering. Precision and recall should both be balanced, for example, precision around 0.85 and recall around 0.85 or higher. Bad models have low recall (missing objects over time) or low precision (many false detections), and inconsistent predictions that jump around between frames.

Metrics pitfalls (accuracy paradox, data leakage, overfitting indicators)

One pitfall is relying only on frame-level accuracy without checking temporal consistency, which can hide flickering predictions. Another is data leakage: if training and test sets contain frames from the same video or overlapping scenes, near-duplicate frames appear on both sides and inflate every metric. Overfitting shows up as near-perfect accuracy on training videos but poor generalization to new videos with different motion or lighting.
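The leakage pitfall has a simple structural fix: split by whole video, never by individual frame. A sketch with hypothetical clip IDs:

```python
import random

# Hypothetical dataset of 10 clips; each clip contributes many frames.
video_ids = [f"clip_{i:03d}" for i in range(10)]

random.seed(0)
random.shuffle(video_ids)
split = int(0.8 * len(video_ids))
train_videos = set(video_ids[:split])
test_videos = set(video_ids[split:])

# No clip contributes frames to both sides, so near-duplicate
# adjacent frames cannot leak from train into test.
assert train_videos.isdisjoint(test_videos)
```

A frame-level random split would put frame t in train and the nearly identical frame t+1 in test, which is why leaked video benchmarks report metrics that collapse on genuinely new footage.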

Your model has 98% accuracy but 12% recall on fraud. Is it good?

No. This is the accuracy paradox: when positives are rare (fraud might be ~2% of cases), a model that predicts "negative" almost every time can reach 98% accuracy while catching almost no real events. A model with 12% recall misses 88% of actual fraud, which is exactly the risky outcome accuracy hides. You should improve recall even if overall accuracy drops, so more true cases are caught over time.
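The paradox is easy to verify with illustrative numbers (assumed here: 10,000 transactions, 2% fraud, and a model that flags almost nothing):

```python
total, fraud = 10_000, 200          # 2% positive class
tp, fn = 24, 176                    # catches only 24 frauds -> recall = 12%
tn, fp = total - fraud, 0           # calls everything else legitimate

accuracy = (tp + tn) / total
recall = tp / (tp + fn)

print(f"accuracy = {accuracy:.2%}")  # 98.24%
print(f"recall   = {recall:.2%}")    # 12.00%
```

The 9,800 easy negatives dominate the accuracy figure, so accuracy alone says almost nothing about whether the rare, important class is being caught.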

Key Result
Video models need metrics that measure both frame accuracy and temporal consistency to ensure reliable detection over time.