We use segmentation evaluation to check how accurately a model recovers object shapes in images at the pixel level. IoU and Dice scores quantify how closely the model's predicted mask matches the ground-truth mask.
Segmentation evaluation (IoU, Dice) in Computer Vision
Introduction
When you want to measure how well a model separates objects from the background in photos.
When comparing different models to see which one finds shapes more accurately.
When improving a model that detects tumors or organs in medical images.
When checking if a self-driving car correctly identifies road lanes or obstacles.
When validating models that segment animals or plants in nature photos.
Syntax
def iou_score(pred_mask, true_mask):
    intersection = (pred_mask & true_mask).sum()
    union = (pred_mask | true_mask).sum()
    return intersection / union if union != 0 else 1.0

def dice_score(pred_mask, true_mask):
    intersection = (pred_mask & true_mask).sum()
    total = pred_mask.sum() + true_mask.sum()
    return 2 * intersection / total if total != 0 else 1.0
pred_mask and true_mask are binary masks (NumPy arrays of True/False or 1/0) of the same shape; the & and | operators require arrays, not plain Python lists.
IoU is also called the Jaccard index; Dice is related to it by Dice = 2 * IoU / (1 + IoU), so it counts the intersection twice and is never lower than IoU.
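As a quick sanity check, the relationship Dice = 2 * IoU / (1 + IoU) can be verified numerically. This is a minimal sketch reusing the functions from the Syntax section; the mask values are made up for illustration:

```python
import numpy as np

def iou_score(pred_mask, true_mask):
    intersection = (pred_mask & true_mask).sum()
    union = (pred_mask | true_mask).sum()
    return intersection / union if union != 0 else 1.0

def dice_score(pred_mask, true_mask):
    intersection = (pred_mask & true_mask).sum()
    total = pred_mask.sum() + true_mask.sum()
    return 2 * intersection / total if total != 0 else 1.0

# Arbitrary example masks: 2 shared pixels, 4 pixels in the union
pred = np.array([0, 1, 1, 1, 0], dtype=bool)
true = np.array([1, 1, 1, 0, 0], dtype=bool)

iou = iou_score(pred, true)    # 2 / 4 = 0.5
dice = dice_score(pred, true)  # 2 * 2 / (3 + 3) = 0.67 (approx)
print(iou, dice, 2 * iou / (1 + iou))  # dice equals 2*iou/(1+iou)
```

The identity holds for any pair of masks, so either score can be converted to the other after the fact.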
Examples
This example calculates IoU for small masks where some pixels overlap: one pixel is shared and three pixels are in the union, so the IoU is 1/3.
pred_mask = np.array([0, 1, 1, 0], dtype=bool)
true_mask = np.array([1, 1, 0, 0], dtype=bool)
iou = iou_score(pred_mask, true_mask)  # 1 shared pixel / 3 in the union = 0.33
Dice score is 1.0 here because the prediction perfectly matches the truth.
pred_mask = np.array([1, 1, 1, 1], dtype=bool)
true_mask = np.array([1, 1, 1, 1], dtype=bool)
dice = dice_score(pred_mask, true_mask)  # 2 * 4 / (4 + 4) = 1.0
Sample Model
This program calculates IoU and Dice scores for two small example masks. It shows how much the predicted shape overlaps with the true shape.
import numpy as np

def iou_score(pred_mask, true_mask):
    intersection = np.logical_and(pred_mask, true_mask).sum()
    union = np.logical_or(pred_mask, true_mask).sum()
    return intersection / union if union != 0 else 1.0

def dice_score(pred_mask, true_mask):
    intersection = np.logical_and(pred_mask, true_mask).sum()
    total = pred_mask.sum() + true_mask.sum()
    return 2 * intersection / total if total != 0 else 1.0

# Example masks
pred_mask = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]], dtype=bool)
true_mask = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]], dtype=bool)

print(f"IoU score: {iou_score(pred_mask, true_mask):.2f}")
print(f"Dice score: {dice_score(pred_mask, true_mask):.2f}")
Output
IoU score: 0.50
Dice score: 0.67
Important Notes
IoU and Dice scores range from 0 to 1, where 1 means perfect overlap.
Always use masks of the same size and shape for correct evaluation.
Dice score is never lower than IoU for the same masks because it counts the intersection twice; the two scores are equal only at 0 and 1.
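The notes above can be exercised directly. This is a minimal sketch reusing the scoring functions with illustrative masks; it shows the edge-case handling (both masks empty, no overlap) and the fact that Dice is at least as large as IoU:

```python
import numpy as np

def iou_score(pred_mask, true_mask):
    intersection = np.logical_and(pred_mask, true_mask).sum()
    union = np.logical_or(pred_mask, true_mask).sum()
    return intersection / union if union != 0 else 1.0

def dice_score(pred_mask, true_mask):
    intersection = np.logical_and(pred_mask, true_mask).sum()
    total = pred_mask.sum() + true_mask.sum()
    return 2 * intersection / total if total != 0 else 1.0

empty = np.zeros(4, dtype=bool)
full = np.ones(4, dtype=bool)

# Both masks empty: union is 0, so the score is defined as 1.0 (perfect agreement)
print(iou_score(empty, empty))  # 1.0
# No overlap at all
print(iou_score(full, empty))   # 0.0

# Partial overlap: Dice exceeds IoU because the intersection is counted twice
pred = np.array([1, 1, 0, 0], dtype=bool)
true = np.array([0, 1, 1, 0], dtype=bool)
iou = iou_score(pred, true)    # 1 / 3
dice = dice_score(pred, true)  # 2 * 1 / (2 + 2) = 0.5
print(f"IoU = {iou:.2f}, Dice = {dice:.2f}")
```

Defining the score as 1.0 when both masks are empty is one common convention; some libraries instead return 0 or NaN in that case, so check the behaviour of whichever implementation you use.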
Summary
IoU and Dice are simple ways to measure how well a model finds shapes in images.
They compare predicted and true masks by looking at overlap and total area.
Higher scores mean better segmentation quality.