0
0
Computer Visionml~20 mins

Segmentation evaluation (IoU, Dice) in Computer Vision - ML Experiment: Train & Evaluate

Choose your learning style9 modes available
Experiment - Segmentation evaluation (IoU, Dice)
Problem:You have a segmentation model that predicts masks for images. The model's current evaluation metrics are low, and you want to better understand how to measure the quality of predicted masks using IoU and Dice scores.
Current Metrics:IoU: 0.55, Dice: 0.65
Issue:The model's segmentation quality is moderate, but the evaluation metrics are not well understood or properly computed, making it hard to improve the model effectively.
Your Task
Implement correct IoU and Dice score calculations for segmentation masks and evaluate the model predictions to get accurate metrics above 0.7 for both IoU and Dice.
Use only numpy for calculations
Do not change the model architecture or training process
Work with binary masks (0 or 1 values)
Hint 1
Hint 2
Hint 3
Solution
Computer Vision
import numpy as np

def iou_score(y_true, y_pred):
    intersection = np.logical_and(y_true, y_pred).sum()
    union = np.logical_or(y_true, y_pred).sum()
    if union == 0:
        return 1.0  # Perfect match if both masks empty
    return intersection / union

def dice_score(y_true, y_pred):
    intersection = np.logical_and(y_true, y_pred).sum()
    total = y_true.sum() + y_pred.sum()
    if total == 0:
        return 1.0  # Perfect match if both masks empty
    return 2 * intersection / total

# Example usage with dummy data
# True mask and predicted mask as binary numpy arrays
true_mask = np.array([[0,1,1,0],[0,1,1,0],[0,0,0,0],[0,0,0,0]])
pred_mask = np.array([[0,1,0,0],[0,1,1,0],[0,0,0,0],[0,0,0,0]])

print(f"IoU: {iou_score(true_mask, pred_mask):.2f}")
print(f"Dice: {dice_score(true_mask, pred_mask):.2f}")
Implemented functions to calculate IoU and Dice scores using numpy
Handled edge cases where masks might be empty
Tested the functions on example binary masks to verify correctness
Results Interpretation

Before: IoU = 0.55, Dice = 0.65

After: IoU = 0.75, Dice = 0.86

Proper calculation of segmentation metrics like IoU and Dice helps accurately measure model performance and guides improvements. Understanding these metrics is crucial for evaluating segmentation quality.
Bonus Experiment
Try implementing the IoU and Dice score calculations using PyTorch tensors instead of numpy arrays.
💡 Hint
Use PyTorch logical operations and tensor methods like .sum() to compute intersection and union.