Background subtraction is a common technique in motion detection. What does it primarily help to achieve?
Think about how motion detection separates moving parts from the rest.
Background subtraction removes the static parts of the scene, leaving only moving objects visible for detection.
Consider this Python code using OpenCV for simple motion detection:
import cv2
cap = cv2.VideoCapture('video.mp4')
ret, frame1 = cap.read()
ret, frame2 = cap.read()
diff = cv2.absdiff(frame1, frame2)
gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 25, 255, cv2.THRESH_BINARY)
count = cv2.countNonZero(thresh)
print(count)

What does the printed count represent?
Look at what cv2.countNonZero counts after thresholding the difference.
The code computes the absolute difference between two consecutive frames, converts it to grayscale, thresholds it to keep only substantial changes, and counts the changed pixels. The printed count is therefore a rough measure of how much of the frame is in motion.
You want to detect motion in a video where lighting changes frequently, like clouds passing over the sun. Which approach is best?
Consider models that adapt to gradual background changes.
GMM background subtraction (e.g. OpenCV's MOG2) models each background pixel as a mixture of Gaussians that updates over time, so gradual illumination changes are absorbed into the background model rather than flagged as motion.
You have ground truth masks showing where motion occurs and your algorithm's predicted masks. Which metric best measures how well your algorithm detects motion?
Think about how to compare predicted and actual motion regions.
IoU (Intersection over Union) divides the overlap between the predicted and ground-truth motion masks by their combined area, giving a single score between 0 and 1 that directly measures detection accuracy.
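A minimal IoU computation on two binary masks (the mask shapes and regions below are made up for illustration):

```python
import numpy as np

def iou(pred, gt):
    # Both masks are boolean arrays of the same shape.
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

gt = np.zeros((10, 10), dtype=bool)
gt[2:6, 2:6] = True      # 16 true-motion pixels
pred = np.zeros((10, 10), dtype=bool)
pred[4:8, 4:8] = True    # 16 predicted pixels, partially overlapping

print(iou(pred, gt))  # 4 / 28 ≈ 0.143
```

A perfect prediction scores 1.0; a prediction with no overlap scores 0, which is why IoU is the standard metric for comparing detection masks.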
Review this Python code snippet for motion detection:
import cv2
cap = cv2.VideoCapture('video.mp4')
ret, frame1 = cap.read()
while True:
    ret, frame2 = cap.read()
    diff = cv2.absdiff(frame1, frame2)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)
    count = cv2.countNonZero(thresh)
    if count > 5000:
        print('Motion detected')
    frame1 = frame2
    if cv2.waitKey(30) & 0xFF == 27:
        break
cap.release()

Why might this code miss some motion events?
Check what happens when the video ends or frame reading fails.
The loop never checks ret. When the video ends or a read fails, ret is False and frame2 is None, so cv2.absdiff raises an error and the loop crashes instead of exiting cleanly; any motion after a transient read failure is therefore never detected.