Computer Vision · ~20 mins

Fairness in face recognition in Computer Vision - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual
intermediate
Understanding Bias Sources in Face Recognition

Which of the following is the most common source of bias in face recognition systems?

A. Using a high learning rate during training
B. Applying too many convolutional layers in the neural network
C. Unequal representation of demographic groups in the training data
D. Using grayscale images instead of color images
💡 Hint

Think about what causes the model to perform differently on various groups.
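As a practical follow-up: you can audit your own training set for this issue before training anything. A minimal sketch, using illustrative toy group labels (not data from this problem):

```python
from collections import Counter

# Toy audit: count how many training images each demographic group has.
# The labels below are illustrative placeholders.
labels = ["A", "A", "A", "A", "B", "A", "A", "B"]
counts = Counter(labels)
total = len(labels)
for group, n in sorted(counts.items()):
    print(f"{group}: {n} images ({n / total:.0%})")
```

A heavily skewed distribution here is an early warning sign long before any fairness metric is computed.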

Metrics
intermediate
Evaluating Fairness Metrics

You have a face recognition model tested on two groups: Group A and Group B. The false positive rate (FPR) for Group A is 2%, and for Group B is 10%. What does this indicate about the model's fairness?

A. The model is biased against Group B because it has a higher false positive rate
B. The model is fair because both FPRs are below 15%
C. The model is biased against Group A because it has a lower false positive rate
D. The false positive rate does not relate to fairness
💡 Hint

Higher false positive rates mean more mistakes for that group.
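As a refresher on where the percentages in the problem come from, here is a minimal sketch of computing an FPR per group. The toy labels below are illustrative, not the problem's actual data:

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN), computed over the negatives of one group."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    fp = np.sum((y_pred == 1) & (y_true == 0))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    return fp / (fp + tn)

# Illustrative Group A: 1 false match out of 50 true negatives -> FPR = 2%
y_true_a = [0] * 50
y_pred_a = [1] + [0] * 49
print(false_positive_rate(y_true_a, y_pred_a))  # 0.02
```

Comparing this number across demographic groups (rather than looking at overall accuracy) is what surfaces disparities like the one in this problem.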

Predict Output
advanced
Output of Face Embedding Normalization

What is the output of the following Python code snippet that normalizes face embeddings?

import numpy as np
embeddings = np.array([[3, 4], [0, 0], [1, 1]])
norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
normalized = embeddings / norms
print(np.round(normalized, 2))
A
[[0.6 0.8]
 [0.  0. ]
 [0.71 0.71]]
B
[[0.6 0.8]
 [nan nan]
 [0.71 0.71]]
C
[[0.6 0.8]
 [0.  0. ]
 [0.5 0.5]]
D
[[0.6 0.8]
 [nan nan]
 [0.5 0.5]]
💡 Hint

Consider what happens when dividing by zero norm.
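Once you have committed to an answer: in practice, embedding pipelines usually guard this division so that an all-zero embedding cannot poison downstream similarity scores. A minimal sketch of one common guard (substituting 1 for zero norms):

```python
import numpy as np

embeddings = np.array([[3.0, 4.0], [0.0, 0.0], [1.0, 1.0]])
norms = np.linalg.norm(embeddings, axis=1, keepdims=True)

# Replace zero norms with 1 so an all-zero embedding stays all-zero
# instead of producing invalid values in the division.
safe_norms = np.where(norms == 0, 1.0, norms)
normalized = embeddings / safe_norms
print(np.round(normalized, 2))
```

An alternative is to drop or flag zero-norm embeddings entirely, since they usually indicate a failed face detection.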

Model Choice
advanced
Choosing a Model Architecture to Reduce Bias

Which model architecture choice is best to help reduce bias in face recognition across diverse populations?

A. A simple logistic regression model trained on raw pixel values
B. A deep CNN trained only on a large dataset from one ethnicity
C. A deep CNN trained on unbalanced data without any fairness considerations
D. A shallow CNN trained on a balanced dataset with demographic labels used for fairness constraints
💡 Hint

Think about dataset balance and fairness-aware training.

🔧 Debug
expert
Debugging Disparate Impact in Face Recognition

A face recognition system shows 90% accuracy on Group X and 60% accuracy on Group Y. The training data is balanced. Which of the following is the most likely cause?

A. The model uses a loss function that does not penalize errors differently across groups
B. The model architecture is too complex for the task
C. The training data contains mislabeled images for Group X only
D. The model was trained with early stopping
💡 Hint

Consider how the loss function affects learning fairness.
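To make the hint concrete: one simple remedy is to reweight the loss by group so that errors on the underperforming group cost more. A minimal sketch, assuming binary labels, predicted probabilities, and a group id per sample (all names and numbers below are illustrative):

```python
import numpy as np

def group_weighted_bce(y_true, p_pred, groups, group_weights):
    """Binary cross-entropy where each sample's loss is scaled by a
    per-group weight (illustrative sketch, not a specific library API)."""
    y_true = np.asarray(y_true, dtype=float)
    p_pred = np.clip(np.asarray(p_pred, dtype=float), 1e-7, 1 - 1e-7)
    w = np.array([group_weights[g] for g in groups], dtype=float)
    bce = -(y_true * np.log(p_pred) + (1 - y_true) * np.log(1 - p_pred))
    return float(np.mean(w * bce))

# Upweighting the underperforming group "Y" makes its errors more
# expensive, pushing training toward closing the accuracy gap.
loss = group_weighted_bce(
    y_true=[1, 0, 1, 0],
    p_pred=[0.9, 0.2, 0.6, 0.4],
    groups=["X", "X", "Y", "Y"],
    group_weights={"X": 1.0, "Y": 2.0},
)
print(round(loss, 4))
```

With equal weights this reduces to ordinary binary cross-entropy, which is exactly the "does not penalize errors differently across groups" situation described in option A.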