Which of the following best describes AI bias?
Think about how data affects AI decisions.
AI bias occurs when the data used to train an AI system is incomplete or unrepresentative, causing the system to make systematically unfair decisions.
Which of these is a common source of bias in AI systems?
Consider what happens if AI learns from limited or narrow data.
Training an AI only on data from one group or region can cause bias, because the model never learns patterns from other groups.
An AI chatbot gives different answers to similar questions depending on the user's name. What does this indicate?
Think about how user identity might affect AI responses.
If an AI changes its answers based on identity cues such as a user's name, it may be biased toward or against certain groups.
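One common way to probe for this kind of bias is a counterfactual test: ask the same question with only the name swapped and compare the responses. A minimal sketch, where `get_response` and `biased_bot` are hypothetical stand-ins for a real chatbot, not an actual API:

```python
def audit_name_bias(get_response, question, names):
    """Ask the same question under different names and report any divergence."""
    answers = {name: get_response(f"My name is {name}. {question}") for name in names}
    # More than one distinct answer to an identical question is a bias signal.
    return len(set(answers.values())) > 1, answers

# Hypothetical biased chatbot stub, for illustration only.
def biased_bot(prompt):
    return "Approved" if "Alice" in prompt else "Denied"

flagged, answers = audit_name_bias(biased_bot, "Can I get a loan?", ["Alice", "Bob"])
print(flagged)  # True: the answer changed when only the name changed
```

A fair system should give the same answer regardless of the name, so `flagged` would be `False`.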
What is a likely effect of training an AI system on biased data?
Consider how training data influences AI behavior.
Biased training data causes the AI to learn and reproduce unfair patterns, leading to inaccurate or unfair results for some groups.
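The effect can be reproduced with a toy example: a model that simply learns historical approval rates per group will inherit whatever skew the data contains. A minimal sketch using made-up data (the groups, labels, and threshold are illustrative assumptions):

```python
from collections import defaultdict

# Made-up historical decisions; group B is mostly rejected in the data.
training = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 1)]

def fit_group_rates(rows):
    """Learn the historical approval rate for each group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, label in rows:
        totals[group] += 1
        approvals[group] += label
    return {g: approvals[g] / totals[g] for g in totals}

rates = fit_group_rates(training)
# The "model" approves whenever the learned rate exceeds 0.5, so it
# reproduces the unfairness baked into the historical data.
predictions = {g: rates[g] > 0.5 for g in rates}
print(predictions)  # {'A': True, 'B': False}
```

The model is not malicious; it is simply faithful to skewed data, which is exactly how biased training data turns into biased behavior.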
Which approach is most effective to reduce AI bias?
Think about how to make AI fair and balanced.
Training on diverse, representative data and auditing the AI's outputs regularly helps detect and correct bias, making the system fairer.
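Regular testing can be as simple as measuring the positive-outcome rate per group and flagging large gaps, a basic demographic-parity check. A minimal sketch, assuming group labels and model predictions are available; the 0.2 tolerance is an illustrative assumption, not a standard:

```python
def parity_gap(groups, predictions):
    """Return the largest difference in positive-outcome rate between groups."""
    rates = {}
    for g in set(groups):
        outcomes = [p for grp, p in zip(groups, predictions) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values()), rates

gap, rates = parity_gap(["A", "A", "B", "B"], [1, 1, 1, 0])
print(round(gap, 2))  # 0.5: group A approved 100% of the time, group B only 50%

if gap > 0.2:  # hypothetical tolerance threshold
    print("Possible bias: investigate the data and retrain")
```

Running a check like this after every retraining makes bias a measurable quantity rather than an occasional surprise.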