
IoU (Intersection over Union) in Computer Vision - Model Metrics & Evaluation

What IoU measures and why it matters

IoU, or Intersection over Union, measures how much two shapes overlap. In computer vision, it tells us how well a model's predicted box matches the true box around an object. The higher the IoU, the better the prediction. It matters because it directly shows the quality of object detection or segmentation.

Confusion matrix or equivalent visualization

IoU is not a confusion matrix but a ratio:

    IoU = Area of Overlap / Area of Union

    For example:
    - Predicted box area = 40
    - Ground truth box area = 50
    - Overlap area = 30

    IoU = 30 / (40 + 50 - 30) = 30 / 60 = 0.5
    

This means the predicted box covers half of the combined area of prediction and truth.
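As a sketch, the same ratio can be computed directly for axis-aligned boxes given as (x1, y1, x2, y2) corners; the box coordinates below are chosen to reproduce the worked example (areas 40 and 50, overlap 30):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)           # area of overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter                          # area of union
    return inter / union if union else 0.0

# Areas 40 and 50 with an overlap of 30, as in the example above
print(iou((0, 0, 8, 5), (2, 0, 12, 5)))  # 0.5
```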

Precision vs Recall tradeoff with IoU

IoU helps decide if a detection counts as correct. Usually, a threshold like 0.5 is set:

  • If IoU ≥ 0.5, prediction is a true positive (good match).
  • If IoU < 0.5, prediction is a false positive (bad match).

Raising the threshold enforces stricter matching: only tightly localized boxes count as true positives, so borderline detections are reclassified as false positives and recall drops. Lowering it counts looser overlaps as correct, raising recall but rewarding sloppier localization. In practice, the precision-recall tradeoff is then traced by sweeping the model's confidence threshold at a fixed IoU cutoff (for example, mAP@0.5).
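The threshold rule above can be sketched as follows; the IoU values are illustrative, and each is assumed to be a detection's best-match IoU against the ground truth:

```python
def classify_detections(ious, threshold=0.5):
    """Label each detection TP/FP by comparing its best-match IoU to a threshold."""
    return ["TP" if v >= threshold else "FP" for v in ious]

print(classify_detections([0.9, 0.55, 0.45, 0.3]))  # ['TP', 'TP', 'FP', 'FP']
```

With the stricter cutoff `threshold=0.75`, the 0.55 detection would flip from TP to FP.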

What good vs bad IoU values look like

Good IoU values are close to 1.0, meaning predicted and true boxes almost perfectly overlap.

  • IoU ≥ 0.75: Excellent detection
  • 0.5 ≤ IoU < 0.75: Acceptable detection
  • IoU < 0.5: Poor detection, likely a false positive

For example, an IoU of 0.8 means the prediction is very accurate. An IoU of 0.3 means the prediction barely overlaps the true object.
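A small helper can map a score to the rough quality bands listed above (the cutoffs are the illustrative ones from this section, not a universal standard):

```python
def iou_quality(iou):
    """Map an IoU score to a rough quality band (illustrative cutoffs)."""
    if iou >= 0.75:
        return "excellent"
    if iou >= 0.5:
        return "acceptable"
    return "poor"

print(iou_quality(0.8), iou_quality(0.3))  # excellent poor
```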

Common pitfalls with IoU

  • Using IoU alone ignores confidence scores; a high-IoU but low-confidence prediction may still be discarded.
  • The choice of IoU threshold shapes the evaluation; set too high, it unfairly penalizes good detections.
  • Small objects can score low IoU even after tiny shifts, making their evaluation tricky.
  • IoU does not capture class correctness; a box can overlap well yet predict the wrong object type.
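The small-object pitfall is easy to demonstrate: the same 3-pixel shift that barely dents the IoU of a large box roughly halves it for a small one. The box sizes and shift below are illustrative:

```python
def iou(a, b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    iw = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

# Identical 3-pixel horizontal shift applied to a 10x10 box and a 100x100 box
small = iou((0, 0, 10, 10), (3, 0, 13, 10))
large = iou((0, 0, 100, 100), (3, 0, 103, 100))
print(f"small box: {small:.2f}, large box: {large:.2f}")  # small box: 0.54, large box: 0.94
```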

Self-check question

Your object detection model has an average IoU of 0.4 on test images. Is this good? Why or why not?

Answer: No, 0.4 IoU is below the common threshold of 0.5, meaning predictions often poorly overlap true objects. The model likely misses or poorly locates objects and needs improvement.

Key Result
IoU measures overlap quality between predicted and true object regions; values above 0.5 usually indicate acceptable detection.