Computer Vision · ~10 mins

Fairness in Face Recognition - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
1. Fill in the blank (easy)

Complete the code to load a face recognition dataset.

Computer Vision
from sklearn.datasets import fetch_lfw_people
faces = fetch_lfw_people(min_faces_per_person=[1])
A. 100
B. 50
C. 70
D. 30
Common Mistakes
Choosing too low a number results in too few samples per person.
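What `min_faces_per_person` does can be checked without downloading LFW: the loader keeps only identities that have at least that many images. A minimal sketch of the filtering, using a hypothetical label array (the counts are illustrative):

```python
from collections import Counter
import numpy as np

# Hypothetical per-image identity labels (illustrative; no download needed)
labels = np.array([0] * 80 + [1] * 75 + [2] * 12 + [3] * 5)

# Emulate min_faces_per_person=70: keep identities with >= 70 images
counts = Counter(labels.tolist())
keep = {person for person, n in counts.items() if n >= 70}
mask = np.isin(labels, list(keep))
print(sorted(keep), mask.sum())  # identities 0 and 1 survive, 155 images
```

A higher threshold means fewer identities but more images per identity, which is why very low values leave too few samples per person.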
2. Fill in the blank (medium)

Complete the code to split data into training and testing sets.

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(faces.data, faces.target, test_size=[1], random_state=42)
A. 0.2
B. 0.25
C. 0.5
D. 0.3
Common Mistakes
Using too large a test size leaves too little data for training.
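The split can be exercised end to end with a synthetic stand-in for `faces.data` and `faces.target` (the shapes and class count here are illustrative, not LFW's):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-in for faces.data / faces.target (illustrative shapes)
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 50))      # 200 "images", 50 features each
y = rng.integers(0, 5, size=200)    # 5 "identities"

# Hold out 25% for testing; fix random_state for reproducibility
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)
print(X_train.shape, X_test.shape)  # (150, 50) (50, 50)
```

Fixing `random_state` makes the split reproducible, which matters when comparing fairness metrics across runs.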
3. Fill in the blank (hard)

Fix the error in the model training code by completing the missing classifier.

from sklearn.svm import [1]
model = [1](class_weight='balanced')
model.fit(X_train, y_train)
A. LinearSVC
B. SVR
C. SVC
D. NuSVC
Common Mistakes
Using SVR, which is for regression, not classification.
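Why `class_weight='balanced'` matters here: it scales each class's penalty inversely to its frequency, which helps when some identities have far more images than others. A minimal sketch on imbalanced toy data (the cluster means and sizes are illustrative):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Imbalanced toy data: 90 samples of class 0, 10 of class 1
X = np.vstack([rng.normal(0, 1, (90, 4)), rng.normal(3, 1, (10, 4))])
y = np.array([0] * 90 + [1] * 10)

# class_weight='balanced' weights each class by n_samples / (n_classes * n_c),
# so the minority class is not drowned out during fitting
model = SVC(class_weight='balanced')
model.fit(X, y)
preds = model.predict(X)
print(preds[:5])
```

Without the weighting, the decision boundary tends to favor the majority identity, which is exactly the bias a fairness-aware pipeline tries to avoid.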
4. Fill in the blank (hard)

Fill both blanks to compute accuracy and balanced accuracy for fairness evaluation.

from sklearn.metrics import [1], [2]
acc = [1](y_test, y_pred)
bal_acc = [2](y_test, y_pred)
A. accuracy_score
B. balanced_accuracy_score
C. f1_score
D. roc_auc_score
Common Mistakes
Using metrics that do not handle imbalance well.
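The difference between the two metrics shows up clearly on imbalanced labels: a classifier that always predicts the majority class gets high accuracy but only chance-level balanced accuracy, since balanced accuracy averages per-class recall. A small sketch:

```python
import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score

# 90 negatives, 10 positives; a degenerate predictor that always says 0
y_test = np.array([0] * 90 + [1] * 10)
y_pred = np.zeros(100, dtype=int)

acc = accuracy_score(y_test, y_pred)                # 0.9: looks good, misleading
bal_acc = balanced_accuracy_score(y_test, y_pred)   # 0.5: (1.0 + 0.0) / 2
print(acc, bal_acc)
```

This is why fairness evaluations report balanced accuracy alongside plain accuracy: it exposes models that ignore under-represented classes.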
5. Fill in the blank (hard)

Fill all three blanks to create a dictionary of fairness metrics by group.

fairness_metrics = {group: [1](y_true[group], y_pred[group]) for group in groups if len(y_true[group]) > 0}

# Use [2] to measure fairness
# Use [3] to measure overall accuracy
A. balanced_accuracy_score
B. accuracy_score
C. f1_score
D. roc_auc_score
Common Mistakes
Mixing up metrics or using ones not suitable for group fairness.
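A minimal sketch of the per-group comprehension, assuming `y_true` and `y_pred` are dictionaries keyed by demographic group (the group names and label arrays here are hypothetical):

```python
import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score

# Hypothetical labels and predictions keyed by group (illustrative)
y_true = {'group_a': np.array([0, 1, 0, 1]), 'group_b': np.array([1, 1, 0, 0])}
y_pred = {'group_a': np.array([0, 1, 0, 0]), 'group_b': np.array([1, 0, 0, 0])}
groups = ['group_a', 'group_b']

# Per-group balanced accuracy: compare across groups to spot disparities
fairness_metrics = {
    group: balanced_accuracy_score(y_true[group], y_pred[group])
    for group in groups
    if len(y_true[group]) > 0
}

# Plain accuracy pooled over all groups for the overall figure
overall = accuracy_score(
    np.concatenate([y_true[g] for g in groups]),
    np.concatenate([y_pred[g] for g in groups]),
)
print(fairness_metrics, overall)
```

Large gaps between the per-group values signal that the model serves some groups worse than others, even when the overall accuracy looks fine.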