Which of the following best describes a common source of bias in AI models?
Think about what happens if the data the AI learns from is not balanced.
Bias often comes from training data that under-represents some groups, causing the model to perform poorly or unfairly for the people it saw least.
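One simple way to surface this kind of bias is to count how each group is represented in the training data before training at all. The sketch below uses made-up group labels and counts purely for illustration:

```python
from collections import Counter

# Hypothetical example: the group labels and 900/100 split are made up
# to illustrate a heavily imbalanced training set.
training_samples = ["group_a"] * 900 + ["group_b"] * 100

counts = Counter(training_samples)
total = sum(counts.values())
for group, n in counts.items():
    share = n / total
    print(f"{group}: {share:.0%} of training data")
    # A heavily skewed share (here 90% vs 10%) signals the model may
    # underperform on the under-represented group.
```

A representation check like this is cheap and catches many problems before any fairness metric on model outputs is needed.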
You have an AI model that predicts loan approvals. The model approves 90% of applications from one group but only 50% from another. Which metric best helps identify this fairness issue?
Look for a metric that compares outcomes across different groups.
Demographic parity measures whether different groups receive positive outcomes at similar rates; a large gap between groups flags a fairness issue.
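A minimal sketch of a demographic parity check, plugging in the 90% and 50% approval rates from the scenario. The group names and counts are hypothetical:

```python
# Approval counts per group (hypothetical numbers matching the scenario).
approvals = {
    "group_a": {"approved": 90, "total": 100},
    "group_b": {"approved": 50, "total": 100},
}

# Positive-outcome rate for each group.
rates = {g: d["approved"] / d["total"] for g, d in approvals.items()}

# Demographic parity difference: gap between the highest and lowest rate.
parity_gap = max(rates.values()) - min(rates.values())

print(rates)       # {'group_a': 0.9, 'group_b': 0.5}
print(parity_gap)  # 0.4 -- a gap this large flags a fairness issue
```

A parity gap of 0 would mean both groups are approved at the same rate; the 0.4 gap here quantifies exactly the disparity described in the question.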
You want to build an AI system that respects user privacy by not storing personal data. Which approach is best?
Consider methods that keep data on user devices instead of sending it to a central place.
Federated learning trains models on user devices without sending personal data to a central server, enhancing privacy.
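The core idea can be sketched with a toy federated averaging (FedAvg) loop: each simulated "device" computes a model update on its own data, and the server averages only the updated weights, never seeing the raw data. All data and hyperparameters below are illustrative:

```python
# Toy federated averaging for a 1-D least-squares model y = w * x.
# Each device holds private (x, y) pairs that never leave the device.

def local_update(w, device_data, lr=0.1):
    # One gradient step computed entirely on the device's own data.
    grad = sum(2 * (w * x - y) * x for x, y in device_data) / len(device_data)
    return w - lr * grad

def federated_round(global_w, devices):
    # Server averages the locally updated weights (FedAvg step);
    # only weights are transmitted, not the underlying data.
    local_weights = [local_update(global_w, data) for data in devices]
    return sum(local_weights) / len(local_weights)

# Two devices with slightly different private data (true slope ~2).
devices = [[(1.0, 2.0), (2.0, 4.0)], [(1.0, 2.1), (3.0, 6.3)]]
w = 0.0
for _ in range(50):
    w = federated_round(w, devices)
print(round(w, 2))  # ~2.07, close to the true slope of about 2
```

Real systems add secure aggregation and differential privacy on top of this pattern, but the privacy benefit already visible here is that the server only ever sees weights, not user data.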
Consider this pseudocode for deploying an AI chatbot:
    if user_input contains sensitive_topic:
        respond with generic answer
    else:
        respond with AI-generated answer

What ethical risk does this code most likely introduce?
Think about how avoiding sensitive topics might affect users seeking help.
By giving only generic answers on sensitive topics, the chatbot may fail to support the users who most need specific help, which can itself cause harm.
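The contrast can be sketched in Python. This is a hypothetical illustration, not a production design: the topic list, resource messages, and `generate_answer` stand-in are all made up. One common mitigation is to route sensitive queries to curated resources rather than a one-size-fits-all deflection:

```python
# Hypothetical topic list and resource messages for illustration only.
SENSITIVE_TOPICS = {"self-harm", "medical", "legal"}

RESOURCES = {
    "self-harm": "If you are in crisis, please contact a local helpline.",
    "medical": "For medical questions, please consult a qualified professional.",
    "legal": "For legal questions, please consult a licensed attorney.",
}

def generate_answer(user_input):
    # Stand-in for the real model call.
    return f"AI answer to: {user_input}"

def risky_respond(topic, user_input):
    # The pattern from the pseudocode: every sensitive query gets the same
    # generic reply, leaving users who need specific help without support.
    if topic in SENSITIVE_TOPICS:
        return "I'm sorry, I can't help with that."
    return generate_answer(user_input)

def safer_respond(topic, user_input):
    # Mitigation: acknowledge the topic and point to appropriate resources
    # instead of a blanket deflection.
    if topic in SENSITIVE_TOPICS:
        return RESOURCES[topic]
    return generate_answer(user_input)
```

The safer variant still avoids generating free-form answers on sensitive topics, but it directs users toward help instead of leaving them with nothing.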
Which statement best explains the challenge of transparency in AI while maintaining security?
Consider what happens if you reveal everything about how an AI works.
Publishing full details of an AI model can expose private training data or reveal attack surfaces, so transparency must be balanced against security.