Why do we convert images from RGB to other color spaces like HSV or LAB in computer vision tasks?
Think about how different color spaces help isolate color features.
Color spaces like HSV separate chromatic information (hue and saturation) from brightness (value), which lets algorithms key on color without interference from lighting changes.
What is the output of this Python code converting a pure red pixel to grayscale using OpenCV?
import cv2
import numpy as np

red_pixel = np.array([[[0, 0, 255]]], dtype=np.uint8)
gray_pixel = cv2.cvtColor(red_pixel, cv2.COLOR_BGR2GRAY)
print(gray_pixel[0, 0])
OpenCV computes grayscale as a weighted sum: Y = 0.299*R + 0.587*G + 0.114*B.
For pure red (B=0, G=0, R=255), grayscale = 0.299 * 255 ≈ 76, so the code prints 76.
Which color space is generally best suited for detecting human skin tones robustly under varying lighting?
Consider which color space separates chromatic content from brightness.
HSV separates hue and saturation from brightness, making skin color detection more stable under different lighting.
When creating a mask to isolate a color range in HSV space, what happens if the threshold range for hue is too wide?
Think about what a wider hue range means for color selection.
A wider hue range captures more colors, including those not intended, which lowers the mask's precision.
Given this code snippet, what is the cause of the incorrect color conversion output?
import cv2
import numpy as np

img = np.array([[[255, 0, 0]]], dtype=np.uint8)
converted = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)
print(converted[0, 0])
OpenCV uses BGR by default; check the input color order.
OpenCV stores images in BGR order by default, so [255, 0, 0] represents a blue pixel. COLOR_RGB2HSV, however, interprets the channels as RGB, so the blue pixel is converted as if it were red, producing hue 0 instead of 120. Using COLOR_BGR2HSV fixes the conversion.