Which of the following is the most common source of bias in face recognition systems?
Think about what causes the model to perform differently on various groups.
Bias often arises because some groups are underrepresented in the training data, so the model learns less reliable representations for them and makes more errors on those groups.
You have a face recognition model tested on two groups: Group A and Group B. The false positive rate (FPR) for Group A is 2%, and for Group B is 10%. What does this indicate about the model's fairness?
A higher false positive rate means the model more often reports a match for that group when there is none.
A higher false positive rate for Group B means the model wrongly identifies more people from Group B, indicating bias against that group.
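To make the comparison concrete, per-group FPR can be computed directly from labels and predictions. This is a minimal sketch; the `false_positive_rate` helper and all of the arrays below are hypothetical illustration data, not part of the original question.

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    # FPR = false positives / all actual negatives
    negatives = (y_true == 0)
    return np.mean(y_pred[negatives] == 1)

# Hypothetical verification results: 1 = "match", 0 = "non-match";
# every sample here is a true non-match, so any predicted 1 is a false positive
y_true = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
y_pred = np.array([0, 0, 0, 0, 0, 1, 0, 0, 0, 0])
groups = np.array(["A"] * 5 + ["B"] * 5)

for g in ("A", "B"):
    mask = groups == g
    print(g, false_positive_rate(y_true[mask], y_pred[mask]))
```

Here Group B's FPR (0.2) exceeds Group A's (0.0), which is exactly the kind of gap the question describes.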
What is the output of the following Python code snippet that normalizes face embeddings?
```python
import numpy as np

embeddings = np.array([[3, 4], [0, 0], [1, 1]])
norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
normalized = embeddings / norms
print(np.round(normalized, 2))
```
Consider what happens when you divide by a norm of zero.
The second embedding is [0, 0], so its norm is zero; dividing by it produces NaN (NumPy emits a RuntimeWarning rather than raising). The output rows are [0.6, 0.8], [nan, nan], and [0.71, 0.71].
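One common fix, sketched here as one option among several, is to guard against zero norms before dividing, so zero vectors stay zero instead of becoming NaN:

```python
import numpy as np

embeddings = np.array([[3, 4], [0, 0], [1, 1]], dtype=float)
norms = np.linalg.norm(embeddings, axis=1, keepdims=True)

# Replace zero norms with 1 so the zero vector divides harmlessly by 1
safe_norms = np.where(norms == 0, 1.0, norms)
normalized = embeddings / safe_norms
print(np.round(normalized, 2))
# rows: [0.6, 0.8], [0.0, 0.0], [0.71, 0.71]
```

Another reasonable choice is to skip or flag zero-norm embeddings entirely, since an all-zero embedding usually signals an upstream failure.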
Which model architecture choice is best to help reduce bias in face recognition across diverse populations?
Think about dataset balance and fairness-aware training.
Using a balanced dataset and incorporating fairness constraints helps the model learn equally across groups, reducing bias.
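As a sketch of one fairness-aware training choice, per-sample loss weights can be set inversely proportional to group frequency so each group contributes equally to the total loss. The group labels and counts below are hypothetical.

```python
import numpy as np

# Hypothetical group labels for an imbalanced training set
groups = np.array(["A"] * 800 + ["B"] * 200)

# Weight each sample inversely to its group's frequency
unique, counts = np.unique(groups, return_counts=True)
freq = dict(zip(unique, counts / len(groups)))
weights = np.array([1.0 / freq[g] for g in groups])
weights /= weights.sum()  # normalize weights to sum to 1

# Each group now carries half of the total loss weight
for g in unique:
    print(g, weights[groups == g].sum())
```

These weights would then multiply the per-sample losses during training; balanced sampling (drawing equal-sized batches per group) is an equivalent alternative.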
A face recognition system shows 90% accuracy on Group X and 60% accuracy on Group Y. The training data is balanced. Which of the following is the most likely cause?
Consider how the loss function affects learning fairness.
If the loss function averages all errors equally with no group awareness, the model can minimize overall loss while still performing much worse on some groups, even when the training data is balanced.