Imagine you want to put virtual glasses on a person's face in a photo. What does face landmark detection help you find?
Think about what you need to place glasses correctly on a face.
Face landmark detection finds important points on the face, such as eyes, nose tip, and mouth corners. These points help align or modify the face in images.
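As a minimal sketch of how those points are used (the coordinates below are hypothetical, not from any real detector), the midpoint between the eyes and the inter-eye distance are enough to position and scale a pair of virtual glasses:

```python
import numpy as np

# Hypothetical landmark coordinates (x, y) in pixels for one face
landmarks = {
    "left_eye":  np.array([120.0, 150.0]),
    "right_eye": np.array([180.0, 150.0]),
    "nose_tip":  np.array([150.0, 190.0]),
}

# Anchor the glasses at the midpoint between the eyes
eye_center = (landmarks["left_eye"] + landmarks["right_eye"]) / 2

# Scale the glasses to the inter-eye distance
eye_distance = np.linalg.norm(landmarks["right_eye"] - landmarks["left_eye"])

print(eye_center)    # -> [150. 150.]
print(eye_distance)  # -> 60.0
```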
What will be the output of this Python code snippet that uses a face landmark detector?
import numpy as np

landmarks = np.array([[30, 40], [35, 45], [40, 50]])
print(landmarks.shape)
Check how many points and coordinates each point has.
The landmarks array has 3 points, each with 2 coordinates (x and y), so the shape is (3, 2).
You want to build a face landmark detector that works fast on a mobile phone. Which model architecture is best?
Think about speed and size for mobile devices.
MobileNetV2 is designed to be fast and small: its depthwise separable convolutions and inverted residual blocks cut computation and parameter count, making it well suited to real-time mobile tasks like face landmark detection.
Which metric best measures how close predicted face landmarks are to the true landmarks?
Think about measuring distance between points.
Mean squared error (MSE) averages the squared differences between predicted and true landmark coordinates, directly measuring how far the predictions are from the ground truth.
You trained a face landmark model but it predicts landmarks shifted far from the face in test images. What is the most likely cause?
Think about coordinate scales and consistency between training and testing.
If training landmarks are normalized (e.g., scaled between 0 and 1) but test landmarks are not, predictions will be off-scale and appear shifted.
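A short sketch of the fix, assuming a hypothetical 200x100 image: apply the same normalization at test time that was used for the training labels, then scale model outputs back to pixels before drawing them.

```python
import numpy as np

# Hypothetical image size and landmarks in pixel coordinates
img_w, img_h = 200, 100
pixel_landmarks = np.array([[120.0, 60.0], [80.0, 40.0]])

# Normalize to [0, 1] by dividing by the image dimensions,
# exactly as was done for the training labels
norm = pixel_landmarks / np.array([img_w, img_h])

# At test time, predictions in [0, 1] must be scaled back to pixels
restored = norm * np.array([img_w, img_h])

print(norm)      # coordinates in [0, 1]
print(restored)  # back in pixel coordinates
```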