Motion detection with camera in Raspberry Pi - Time & Space Complexity
When using a camera on a Raspberry Pi to detect motion, the program checks many pixels repeatedly.
We want to know how the processing time grows as the image size or the number of frames increases.
Analyze the time complexity of the following code snippet.
```python
import cv2

cap = cv2.VideoCapture(0)

while True:
    # Capture two consecutive frames to compare.
    ret1, frame1 = cap.read()
    ret2, frame2 = cap.read()
    if not (ret1 and ret2):
        break  # camera read failed

    # Per-pixel absolute difference between the two frames.
    diff = cv2.absdiff(frame1, frame2)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    _, thresh = cv2.threshold(blur, 20, 255, cv2.THRESH_BINARY)

    # Outline the regions that changed between the frames.
    contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        if cv2.contourArea(contour) > 500:
            print("Motion detected")

cap.release()
```
This code captures two frames repeatedly, compares them pixel by pixel to find differences, and checks for motion by looking at contours.
- Primary operation: Looping over each pixel to compute the difference and then looping over contours to check their area.
- How many times: The pixel comparison happens for every pixel in each frame pair, and the contour loop runs once per detected contour each frame.
As the image size grows, the number of pixels to compare grows too, so the work grows with the number of pixels.
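To see why the work scales with the pixel count, here is a minimal sketch of the per-pixel difference step using NumPy arrays in place of real camera frames. The function name `pixel_diff_count` is illustrative (it is not part of OpenCV); it mimics what `cv2.absdiff` plus thresholding does, touching every pixel exactly once.

```python
import numpy as np

def pixel_diff_count(frame_a, frame_b, threshold=20):
    """Count pixels whose absolute difference exceeds a threshold.

    Every pixel is examined exactly once, so the work is
    proportional to the number of pixels in a frame.
    """
    # Widen to int16 so the subtraction cannot wrap around on uint8.
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    return int(np.count_nonzero(diff > threshold))

# Two tiny synthetic 4x4 grayscale "frames" standing in for captures.
a = np.zeros((4, 4), dtype=np.uint8)
b = a.copy()
b[0, 0] = 200  # simulate motion at a single pixel
print(pixel_diff_count(a, b))  # → 1
```

A real frame from the Pi camera is just a much larger array, so the same reasoning applies with millions of pixels instead of sixteen.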
| Input Size (pixels) | Approx. Operations |
|---|---|
| 10 x 10 (100) | About 100 pixel comparisons |
| 100 x 100 (10,000) | About 10,000 pixel comparisons |
| 1000 x 1000 (1,000,000) | About 1,000,000 pixel comparisons |
Pattern observation: The operations grow roughly in proportion to the number of pixels, so doubling width and height quadruples the work.
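The pattern in the table can be checked with a small counting sketch (pure Python, names chosen for illustration): one comparison per pixel means the operation count equals width times height.

```python
def count_pixel_ops(width, height):
    """Count one comparison per pixel: operations == width * height."""
    ops = 0
    for _ in range(height):
        for _ in range(width):
            ops += 1  # one pixel comparison
    return ops

print(count_pixel_ops(10, 10))    # → 100
print(count_pixel_ops(100, 100))  # → 10000
# Doubling both width and height quadruples the work:
print(count_pixel_ops(20, 20) == 4 * count_pixel_ops(10, 10))  # → True
```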
Time Complexity: O(n), where n is the number of pixels per frame
This means the time to process each pair of frames grows roughly in direct proportion to the number of pixels; over f frame pairs, the total work is O(f * n).
[X] Wrong: "The time to detect motion depends only on the number of frames, not the image size."
[OK] Correct: Each frame contains many pixels, and the program compares every pixel between frames, so larger images take more time even when the frame count stays the same.
Understanding how image size affects processing time helps you explain performance in real projects using cameras or sensors.
"What if we only compared a smaller region of the frame instead of the whole image? How would the time complexity change?"
