Which of the following best describes the purpose of feature matching between images in computer vision?
Think about why we want to find points that look similar in two different pictures.
Feature matching helps identify points that correspond between images, which is essential for tasks like stitching images or 3D reconstruction.
What is the output of the following Python code snippet using OpenCV for feature matching?
import cv2

img1 = cv2.imread('image1.jpg', 0)
img2 = cv2.imread('image2.jpg', 0)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

bf = cv2.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)

good = []
for m, n in matches:
    if m.distance < 0.75 * n.distance:
        good.append(m)
print(len(good))
Look at what is printed at the end of the code.
The code prints the length of the list 'good', which holds only the matches that passed Lowe's ratio test (the best match's distance is below 0.75 times the second-best match's distance), so the output is the number of good matches.
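The ratio test can be sketched without OpenCV at all. In the snippet below, each tuple stands in for the two nearest-neighbor distances that knnMatch with k=2 would return; the distance values are hypothetical, chosen only to illustrate the filter:

```python
# Lowe's ratio test on hypothetical (best, second-best) distance pairs.
candidate_distances = [
    (0.20, 0.90),  # distinctive: best match far better than runner-up -> keep
    (0.55, 0.60),  # ambiguous: the two distances are nearly equal -> reject
    (0.30, 0.50),  # 0.30 < 0.75 * 0.50 = 0.375 -> keep
]

RATIO = 0.75  # Lowe's commonly used threshold

good = [(m, n) for m, n in candidate_distances if m < RATIO * n]
print(len(good))  # 2 pairs pass the ratio test
```

The test rejects ambiguous matches: when the best and second-best neighbors are nearly equidistant, the match is likely between repetitive or non-distinctive features.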
You want to match features between two images taken under different lighting conditions and slight rotations. Which feature detector is most suitable?
Consider which detector is robust to scale, rotation, and lighting changes.
SIFT is designed to be invariant to scale and rotation and robust to lighting changes, making it ideal for matching features under such conditions.
Which metric is commonly used to evaluate the quality of feature matching between two images?
Think about how to measure how many matches are geometrically consistent.
The number of inlier matches after RANSAC filtering indicates how many matches agree with a geometric model, reflecting match quality.
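Inlier counting can be illustrated with NumPy alone. In this minimal sketch the geometric model is a known pure translation and the pixel threshold is a hypothetical value; in practice both would come from cv2.findHomography with RANSAC. A match counts as an inlier when the model maps its source point close to its destination point:

```python
import numpy as np

# Hypothetical matched point pairs (src in image 1, dst in image 2).
src = np.array([[10.0, 20.0], [30.0, 40.0], [50.0, 60.0], [70.0, 80.0]])
# The true model here is a translation by (5, 5); the third match is an outlier.
dst = np.array([[15.0, 25.0], [35.0, 45.0], [90.0, 10.0], [75.0, 85.0]])

model_shift = np.array([5.0, 5.0])  # geometric model (RANSAC would estimate this)
threshold = 3.0                     # reprojection-error tolerance in pixels

errors = np.linalg.norm(src + model_shift - dst, axis=1)
inliers = errors < threshold
print(int(inliers.sum()))  # 3 of the 4 matches agree with the model
```

A higher inlier count means more matches are geometrically consistent, which is why it is a better quality signal than the raw number of descriptor matches.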
Consider this code snippet for feature matching. What error will it raise when run?
import cv2

img1 = cv2.imread('img1.jpg', 0)
img2 = cv2.imread('img2.jpg', 0)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

bf = cv2.BFMatcher()
matches = bf.match(des1, des2)
print(matches[0].queryIdx)
Check what happens if images are not loaded properly.
If either image file is missing or unreadable, cv2.imread returns None instead of raising an exception, so img1 or img2 silently becomes None. Passing None to detectAndCompute then fails; recent OpenCV builds raise a cv2.error (an assertion failure on the empty input), though some versions surface this differently. Checking for None right after loading avoids the confusing downstream error.
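A defensive check makes the failure explicit at the load step. The helper below is a hypothetical sketch; it mimics the None-on-failure behavior of cv2.imread without requiring OpenCV to run:

```python
def require_loaded(img, path):
    """Raise a clear error if an image failed to load.

    cv2.imread returns None (instead of raising) when a file is
    missing or unreadable, so the caller must check explicitly.
    """
    if img is None:
        raise FileNotFoundError(f"could not load image: {path}")
    return img

# Simulating a failed load (cv2.imread would return None here):
try:
    require_loaded(None, 'missing.jpg')
except FileNotFoundError as e:
    print(e)  # could not load image: missing.jpg
```

In real code this would wrap each cv2.imread call, e.g. img1 = require_loaded(cv2.imread('img1.jpg', 0), 'img1.jpg'), so a bad path fails loudly at the source rather than deep inside detectAndCompute.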