Imagine an AI system used to decide who gets a loan. Why must this system be fair?
Think about the impact of biased decisions on people's lives.
Fairness ensures AI treats people equally and avoids harm from biased decisions.
You have a model that predicts whether a person qualifies for a job. Which metric helps check whether the model is biased against a group?
Bias often shows as different error rates for different groups.
False positive rate difference reveals if one group is wrongly flagged more often, indicating bias.
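This check can be sketched in a few lines. The data below is a hypothetical example (the names `y_true`, `y_pred`, and `group` are assumptions, not part of any particular library): compute the false positive rate separately for each group and compare.

```python
import numpy as np

# Hypothetical example data: true labels, model predictions,
# and a binary group attribute.
y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

def false_positive_rate(y_true, y_pred):
    # FPR = false positives / actual negatives
    negatives = y_true == 0
    return (y_pred[negatives] == 1).mean()

fpr_0 = false_positive_rate(y_true[group == 0], y_pred[group == 0])
fpr_1 = false_positive_rate(y_true[group == 1], y_pred[group == 1])
fpr_gap = abs(fpr_0 - fpr_1)
print(fpr_gap)  # a large gap suggests one group is wrongly flagged more often
```

A gap near zero suggests both groups are wrongly flagged at similar rates; a large gap is a red flag worth investigating.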
Consider this Python code snippet that tries to compute the demographic parity difference but fails:

group_0 = predictions[labels == 0]
group_1 = predictions[labels == 1]
parity_diff = abs(group_0.mean() - group_1.mean())
print(parity_diff)
What error will this code raise if predictions is a plain Python list rather than a NumPy array?
import numpy as np
predictions = np.array([0, 1, 1, 0, 1])
labels = np.array([0, 1, 0, 1, 0])
group_0 = predictions[labels == 0]
group_1 = predictions[labels == 1]
parity_diff = abs(group_0.mean() - group_1.mean())
print(parity_diff)
Think about what happens when you index a Python list with a boolean mask.
Boolean-mask indexing works on NumPy arrays but not on Python lists: indexing a list with a NumPy boolean mask raises TypeError: only integer scalar arrays can be converted to a scalar index.
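The fix is to convert the inputs to NumPy arrays before masking. A minimal sketch, using the same example values as the snippet above:

```python
import numpy as np

predictions = [0, 1, 1, 0, 1]   # plain Python lists
labels = [0, 1, 0, 1, 0]

# Converting to NumPy arrays first makes boolean masking valid.
predictions = np.asarray(predictions)
labels = np.asarray(labels)

group_0 = predictions[labels == 0]
group_1 = predictions[labels == 1]
parity_diff = abs(group_0.mean() - group_1.mean())
print(parity_diff)
```

np.asarray is a no-op for inputs that are already arrays, so it is a safe defensive step at the top of any metric function.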
You want an AI model that is easy to understand and explain to users. Which model type is best?
Think about which model shows decisions in simple steps.
Decision trees show clear rules, making them easier to explain than complex models.
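To see why tree-style rules are easy to explain, here is a hand-written decision-tree-style rule for a hypothetical loan decision. The feature names and thresholds are illustrative assumptions, not a real policy or a trained model:

```python
# Each branch reads as a plain if-then rule a user can follow.
def approve_loan(income, debt_ratio):
    if income >= 40_000:
        if debt_ratio < 0.4:
            return "approve"
        return "review"
    return "deny"

print(approve_loan(50_000, 0.2))  # approve: high income, low debt
print(approve_loan(30_000, 0.1))  # deny: income below threshold
```

A trained decision tree produces the same kind of nested threshold rules, which is why its decisions can be read back to users step by step, unlike the internals of a neural network.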
After deploying an AI system, why must we keep monitoring its behavior over time?
Think about how real-world data can shift and affect AI decisions.
Data distributions and environments shift over time, so a deployed model can silently become less accurate or less fair; ongoing monitoring catches this drift before it causes harm.
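One simple monitoring check is to compare an input feature's recent distribution against its training distribution. This is a minimal sketch, assuming hypothetical income data and an arbitrary threshold (real systems use statistical tests and track many features):

```python
import numpy as np

def mean_drift(train_values, live_values):
    # Absolute shift in the feature's mean since training.
    return abs(np.mean(live_values) - np.mean(train_values))

train_income = np.array([40.0, 45.0, 50.0, 55.0, 60.0])  # training data (k$)
live_income  = np.array([60.0, 65.0, 70.0, 75.0, 80.0])  # recent live data (k$)

drift = mean_drift(train_income, live_income)
DRIFT_THRESHOLD = 10.0  # arbitrary threshold chosen for this sketch
if drift > DRIFT_THRESHOLD:
    print("Input distribution has shifted; consider retraining or a fairness audit.")
```

The same idea extends to monitoring outputs: tracking per-group error rates over time can reveal a model becoming unfair even when overall accuracy looks stable.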