Complete the code to create a semantic segmentation mask using a simple threshold.
import numpy as np

image = np.array([[100, 150], [200, 50]])
mask = image [1] 100
print(mask)
The semantic segmentation mask is created by marking pixels greater than 100 as True (foreground) and others as False (background).
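Consistent with the explanation above (pixels strictly greater than 100 become True), one way to fill the blank is the > comparison operator. A runnable sketch:

```python
import numpy as np

image = np.array([[100, 150], [200, 50]])
# Boolean semantic mask: True = foreground, False = background.
# Note 100 > 100 is False, so the top-left pixel stays background.
mask = image > 100
print(mask)
# [[False  True]
#  [ True False]]
```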
Complete the code to label connected components for instance segmentation.
import numpy as np
from scipy.ndimage import label

binary_mask = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 1]])
labeled_mask, num_features = label([1])
print(labeled_mask, num_features)
The label function requires a binary mask to find connected components representing instances.
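Filling the blank with binary_mask, a sketch of the completed call (requires SciPy; by default label uses 4-connectivity, so the isolated bottom-right pixel becomes its own instance):

```python
import numpy as np
from scipy.ndimage import label

binary_mask = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 1]])
# label() assigns a distinct integer id to each connected component
# and returns the labeled array plus the number of components found.
labeled_mask, num_features = label(binary_mask)
print(labeled_mask, num_features)
# [[1 1 0]
#  [0 1 0]
#  [0 0 2]] 2
```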
Fix the error in the code that tries to separate instances in a semantic mask.
import numpy as np

semantic_mask = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 1]])
instances = np.zeros_like(semantic_mask)
count = 1
for i in range(semantic_mask.shape[0]):
    for j in range(semantic_mask.shape[1]):
        if semantic_mask[i, j] [1] 1:
            instances[i, j] = count
            count += 1
print(instances)
The code checks whether a pixel belongs to the object by testing whether its value equals 1; the fix is to use the equality operator == rather than assignment (=).
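With == in the blank, the loop runs as sketched below. Note that this naive approach gives every foreground pixel its own instance id; it does not group connected pixels the way connected-component labeling does:

```python
import numpy as np

semantic_mask = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 1]])
instances = np.zeros_like(semantic_mask)
count = 1
for i in range(semantic_mask.shape[0]):
    for j in range(semantic_mask.shape[1]):
        if semantic_mask[i, j] == 1:  # equality test, not assignment (=)
            instances[i, j] = count   # each foreground pixel gets a new id
            count += 1
print(instances)
# [[1 2 0]
#  [0 3 0]
#  [0 0 4]]
```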
Fill both blanks to create a dictionary comprehension that maps instance ids to pixel counts.
instance_counts = {[1]: np.sum(instances == [2]) for [1] in np.unique(instances) if [2] != 0}
The comprehension iterates over unique instance ids and counts pixels for each id except background (0).
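Assuming both blanks are filled with the same loop variable (here `i`, a hypothetical name), a runnable sketch; the int() casts are added only so the printed dictionary has plain Python keys and values:

```python
import numpy as np

instances = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 2]])
# Map each nonzero instance id to the number of pixels it covers.
instance_counts = {int(i): int(np.sum(instances == i))
                   for i in np.unique(instances) if i != 0}
print(instance_counts)
# {1: 3, 2: 1}
```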
Fill in the blank to create a function that returns the number of instances in a semantic mask.
import numpy as np

def count_instances(mask):
    unique_vals = np.unique(mask)
    count = sum(1 for val in unique_vals if val [1] 0)
    return count

mask = np.array([[0, 1, 1], [2, 0, 2]])
print(count_instances(mask))
The function counts all unique values except 0, which is background.
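Consistent with the explanation, the blank is the inequality operator !=. A self-contained sketch:

```python
import numpy as np

def count_instances(mask):
    # Count the unique ids in the mask, excluding 0 (background).
    unique_vals = np.unique(mask)
    return sum(1 for val in unique_vals if val != 0)

mask = np.array([[0, 1, 1], [2, 0, 2]])
print(count_instances(mask))  # ids 1 and 2 -> 2 instances
```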