In computer vision, features are used to find distinctive points in images. Why are features important for this task?
Think about what makes a point in an image easy to recognize compared to others.
Features describe unique local patterns like edges or corners that differ from nearby areas, making points distinctive and easier to match across images.
What is the output of the following code that detects corners using the Harris corner detector on a simple 5x5 image?
```python
import numpy as np
import cv2

# A white square outline on a black background
image = np.array([
    [0,   0,   0,   0,   0],
    [0, 255, 255, 255,   0],
    [0, 255,   0, 255,   0],
    [0, 255, 255, 255,   0],
    [0,   0,   0,   0,   0]], dtype=np.uint8)

dst = cv2.cornerHarris(image, 2, 3, 0.04)      # blockSize=2, ksize=3, k=0.04
result = (dst > 0.01 * dst.max()).astype(int)  # keep responses above 1% of the max
print(result)
```
Harris corners detect points where intensity changes sharply in multiple directions.
The thresholded output is a 5x5 array of 0s and 1s, with 1s at the positions where the edges of the white square meet (the corners, where intensity changes in both directions) and 0s elsewhere.
You want to match distinctive points between two images taken from different angles and lighting. Which feature descriptor is best suited for this?
Consider which descriptor handles changes in scale and rotation well.
SIFT detects keypoints across scale space and builds gradient-histogram descriptors that are invariant to scale and rotation and robust to moderate lighting changes, making it well suited to matching points across different viewpoints.
Which metric best measures the quality of matching distinctive points between two images?
Think about a metric that checks if points correspond well between images.
Repeatability rate measures the fraction of distinctive points detected in one image that are re-detected at the corresponding locations in the other; a high rate indicates stable, matchable features.
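A minimal sketch of computing repeatability, assuming the ground-truth transform between the two images is known (here a pure translation); the point sets and the pixel tolerance `eps` are made-up illustrative values, not real detector output:

```python
import numpy as np

# Keypoints detected in image 1 and image 2 (illustrative coordinates).
pts1 = np.array([[10, 10], [40, 25], [70, 60], [15, 80]], dtype=float)
pts2 = np.array([[15, 7], [45, 22], [30, 30]], dtype=float)
shift = np.array([5.0, -3.0])  # known image-1-to-image-2 transform

# A point from image 1 "repeats" if, after mapping it into image 2,
# some detection lies within eps pixels of the mapped position.
eps = 2.0
mapped = pts1 + shift
dists = np.linalg.norm(mapped[:, None, :] - pts2[None, :, :], axis=2)
repeated = int((dists.min(axis=1) <= eps).sum())
repeatability = repeated / len(pts1)
print(repeatability)  # 0.5 — two of the four points are re-detected
```

The same idea generalizes to homographies: map each point with the known transform instead of adding a shift, then count correspondences within tolerance.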
You apply a feature detector on a blurred image but get very few distinctive points. What is the most likely reason?
Consider how blurring affects edges and textures in images.
Blurring removes sharp edges and fine details, which are key for detecting distinctive points, so fewer points are found.