What is the most common source of bias in AI systems?
Think about what teaches the AI how to make decisions.
Bias in AI most often comes from its training data. If that data contains unfair or unbalanced patterns, the model will reproduce them in its decisions.
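A minimal sketch of this idea, using an invented toy dataset (the group names, labels, and counts are assumptions for illustration): if positive labels are skewed toward one group in the training data, a model that simply fits the data inherits that skew.

```python
from collections import Counter

# Hypothetical training set: (group, label) pairs, skewed against group "B"
train = [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 2 + [("B", 0)] * 8

def base_rates(data):
    """Fraction of positive labels each group receives in the data."""
    totals = Counter(g for g, _ in data)
    positives = Counter(g for g, y in data if y == 1)
    return {g: positives[g] / totals[g] for g in totals}

rates = base_rates(train)  # {"A": 0.8, "B": 0.2}
```

Any model trained to match these base rates will favor group A, not because of anything about the individuals, but because of the imbalance in what it learned from.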
What is a likely real-world consequence if an AI system used for hiring favors one group over others?
Consider what happens if the AI prefers some people unfairly.
If the AI favors one group, it can lead to discrimination: qualified candidates from other groups are denied a fair chance at jobs.
An AI system used for loan approvals consistently denies loans to applicants from a particular neighborhood. What is the most likely explanation?
Think about what data the AI learned from before it started making decisions.
The AI likely learned from data that unfairly associates that neighborhood with risk, causing biased loan denials.
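This can be illustrated with a hypothetical loan history (the neighborhood names and counts are invented for this sketch): if past lending systematically denied one neighborhood, a model fit to that history learns the denial pattern itself.

```python
from collections import Counter

# Hypothetical loan history: (neighborhood, approved) records in which
# one neighborhood was systematically denied in the past
history = [("north", True)] * 9 + [("north", False)] \
        + [("south", True)] + [("south", False)] * 9

def approval_rates(records):
    """Historical approval rate per neighborhood."""
    totals = Counter(n for n, _ in records)
    approved = Counter(n for n, ok in records if ok)
    return {n: approved[n] / totals[n] for n in totals}

rates = approval_rates(history)  # {"north": 0.9, "south": 0.1}
```

Even if the model never sees a protected attribute directly, a correlated feature like neighborhood can act as a proxy for it, so the historical unfairness carries straight through to new applicants.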
Which approach is most effective in reducing bias in AI systems?
Think about how the AI learns and what affects its fairness.
Diverse and representative data helps the AI learn fairly about all groups, reducing bias in its decisions.
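One common way to approximate this when fully representative data is unavailable is reweighting: give each training example a weight so every group contributes equally overall. A minimal sketch, with an invented unbalanced sample (group names and sizes are assumptions):

```python
from collections import Counter

def balance_weights(groups):
    """Per-example weights giving every group equal total weight."""
    counts = Counter(groups)
    n = len(groups)       # total examples
    k = len(counts)       # number of groups
    return [n / (k * counts[g]) for g in groups]

# Hypothetical unbalanced sample: group "A" is three times over-represented
groups = ["A"] * 6 + ["B"] * 2
weights = balance_weights(groups)
# Each "A" example gets weight 2/3, each "B" example gets weight 2.0,
# so both groups contribute a total weight of 4.0 during training.
```

Reweighting is only one technique; collecting genuinely diverse, representative data remains the more direct fix.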
Consider an AI system used in criminal justice that has biased predictions against a minority group. What is a likely long-term societal consequence?
Think about how unfair treatment affects society over time.
Biased AI in criminal justice can deepen existing inequalities and erode communities' trust in the fairness of institutions over time.