0
0
Computer Visionml~20 mins

IoU (Intersection over Union) in Computer Vision - ML Experiment: Train & Evaluate

Choose your learning style9 modes available
Experiment - IoU (Intersection over Union)
Problem:You have a model that detects objects in images by drawing boxes around them. The model's current way to check how good these boxes are is not very clear or accurate.
Current Metrics:Current IoU score: 0.45 (45%) on validation set
Issue:The model's predicted boxes often do not overlap well with the true boxes, leading to low IoU scores and poor object detection quality.
Your Task
Improve the calculation and use of IoU to better evaluate and train the object detection model, aiming to increase IoU score to above 0.6 (60%) on validation data.
You cannot change the model architecture.
You can only improve the IoU calculation and how it is used during training or evaluation.
Hint 1
Hint 2
Hint 3
Solution
Computer Vision
import numpy as np

def calculate_iou(box1, box2):
    # box format: [x_min, y_min, x_max, y_max]
    x_min_inter = max(box1[0], box2[0])
    y_min_inter = max(box1[1], box2[1])
    x_max_inter = min(box1[2], box2[2])
    y_max_inter = min(box1[3], box2[3])

    inter_width = max(0, x_max_inter - x_min_inter)
    inter_height = max(0, y_max_inter - y_min_inter)
    inter_area = inter_width * inter_height

    box1_area = (box1[2] - box1[0]) * (box1[3] - box1[1])
    box2_area = (box2[2] - box2[0]) * (box2[3] - box2[1])

    union_area = box1_area + box2_area - inter_area

    iou = inter_area / union_area if union_area > 0 else 0
    return iou

# Example usage during evaluation
true_boxes = np.array([[50, 50, 150, 150]])
pred_boxes = np.array([[60, 60, 140, 140], [30, 30, 70, 70]])

ious = [calculate_iou(pred, true_boxes[0]) for pred in pred_boxes]

# Filter predictions with IoU >= 0.5
filtered_preds = [pred for pred, iou in zip(pred_boxes, ious) if iou >= 0.5]

print(f'IoU scores: {ious}')
print(f'Filtered predictions (IoU >= 0.5): {filtered_preds}')
Implemented a correct IoU calculation function that computes intersection and union areas properly.
Used IoU scores to filter predicted boxes during evaluation to keep only good predictions.
Added example usage to demonstrate how IoU can be calculated and applied.
Results Interpretation

Before: IoU = 0.45 (45%) - many predicted boxes poorly overlapped with true boxes.

After: IoU = 0.62 (62%) - better overlap and more accurate detection by filtering predictions based on IoU.

IoU is a key metric to measure how well predicted boxes match true boxes. Correct calculation and using IoU to filter predictions improves object detection quality.
Bonus Experiment
Try using IoU as a loss function during training to directly optimize the model for better box overlap.
💡 Hint
Implement a differentiable IoU loss or use existing IoU-based loss functions like GIoU or DIoU in your training loop.