Which of the following best describes bias in an AI system?
Think about how data quality affects AI decisions.
Bias in an AI system arises when the data used to train it is incomplete, skewed, or unrepresentative, causing the system to make systematically unfair or incorrect decisions.
Consider an AI system trained to approve loan applications. It was trained mostly on data from one city. What is the likely result when it evaluates applications from a different city?
Think about how training data affects AI predictions on new data.
Because the AI learned patterns from only one city's data, it may misjudge applicants from another city whose incomes, costs of living, or credit histories follow different patterns, leading to unfair rejections.
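The city-to-city failure can be sketched with a deliberately tiny "model" (all numbers here are hypothetical illustration data, not from any real lender): it learns an income threshold from City A's applicants, then scores City B, where nominal incomes are lower even though applicants may be equally creditworthy.

```python
# Toy sketch of training-data bias (hypothetical numbers throughout):
# a trivial "model" learns an approval rule from one city's data,
# then applies that rule unchanged to a different city.

def learn_threshold(incomes):
    """Learn the simplest possible rule: approve anyone whose
    income is at least the average seen in training."""
    return sum(incomes) / len(incomes)

def approval_rate(incomes, threshold):
    """Fraction of applicants the learned rule would approve."""
    approved = sum(1 for inc in incomes if inc >= threshold)
    return approved / len(incomes)

# City A (training data): higher nominal incomes, in thousands/year.
city_a = [60, 65, 70, 75, 80]
# City B (unseen data): lower cost of living, lower nominal incomes.
city_b = [40, 45, 50, 55, 60]

threshold = learn_threshold(city_a)        # 70.0
rate_a = approval_rate(city_a, threshold)  # 0.6 in the training city
rate_b = approval_rate(city_b, threshold)  # 0.0 in the new city
```

The rule looks reasonable on City A but rejects every City B applicant, even though income alone says nothing about repayment ability in a cheaper city; this is the unfairness the question points to.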
Which of the following scenarios raises an ethical concern about AI use?
Consider fairness and discrimination in AI decisions.
Recommending job candidates based on gender and age alone can lead to unfair discrimination, because those attributes are unrelated to job performance; this is a clear ethical concern.
Which data set is best suited to train an AI system to fairly recognize faces of all people?
Think about diversity and representation in training data.
A data set that represents people of many ages, skin tones, and backgrounds helps the AI recognize all types of faces with similar accuracy, reducing bias against underrepresented groups.
You are designing an AI system for loan approvals. Which approach best reduces bias and promotes fairness?
Consider how to ensure fairness and accountability in AI.
Combining diverse training data, regular testing for bias, and human review of decisions helps create a fair and accountable AI system.
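The "testing for bias" step above can be made concrete with a minimal check (the group labels and decisions below are made-up illustration data): compute the approval rate for each group and flag the model for human review when the gap between groups exceeds a chosen tolerance.

```python
# Hypothetical bias check: compare approval rates across groups.
# A large gap is a signal to send the model for human review.

def approval_rate_by_group(decisions):
    """decisions: list of (group, approved) pairs -> {group: rate}."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in approval rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

# Made-up audit sample: 4 decisions per group.
decisions = [
    ("group_x", True), ("group_x", True), ("group_x", False), ("group_x", True),
    ("group_y", True), ("group_y", False), ("group_y", False), ("group_y", False),
]
rates = approval_rate_by_group(decisions)  # {'group_x': 0.75, 'group_y': 0.25}
gap = parity_gap(rates)                    # 0.5 -> well above any sane tolerance
```

A real audit would use proper fairness metrics and far more data, but even this sketch shows how a routine check can surface a disparity that a single overall accuracy number would hide.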