Transparency in computer vision models means making clear how the model works and what data it uses. Why does this help prevent misuse?
Think about how knowing what a model does helps people avoid mistakes.
Transparency lets users see how decisions are made, which reduces the risk of unfair or harmful outcomes by exposing biases or errors before they cause damage.
Computer vision systems often use images of people. How does respecting data privacy help prevent misuse?
Think about what happens if personal photos are shared without permission.
Respecting privacy means personal images are collected with consent, stored securely, and anonymized where possible, reducing risks such as identity theft or unauthorized surveillance.
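One concrete form of careful handling is anonymizing identifiable regions before storage. This is a minimal sketch, assuming a grayscale image represented as a nested list of 0-255 pixel values and a hypothetical, already-known face region; real systems would use a face detector and proper image libraries.

```python
# Sketch: crude anonymisation by replacing a region with its mean pixel value.
# The region coordinates are hypothetical; in practice they would come
# from a face-detection step.
def blur_region(image, top, left, height, width):
    """Replace a rectangular region with its mean value (in place)."""
    region = [image[r][c]
              for r in range(top, top + height)
              for c in range(left, left + width)]
    mean = sum(region) // len(region)
    for r in range(top, top + height):
        for c in range(left, left + width):
            image[r][c] = mean
    return image

img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
blur_region(img, 0, 0, 2, 2)  # anonymise the top-left 2x2 "face" region
print(img)  # → [[30, 30, 30], [30, 30, 60], [70, 80, 90]]
```

Averaging is a stand-in here; production pipelines typically use stronger methods such as Gaussian blurring or full redaction.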
You want to check if a face recognition model works equally well for different skin tones. Which metric is best to detect bias?
Think about comparing performance across different groups.
Measuring accuracy separately for each group (disaggregated or per-group accuracy) reveals whether the model favors some groups over others, indicating bias.
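The per-group metric can be sketched in a few lines. The labels, predictions, and group tags below are hypothetical toy data; the point is only to show accuracy computed separately per group rather than pooled.

```python
# Sketch: disaggregated accuracy to surface performance gaps between groups.
from collections import defaultdict

def per_group_accuracy(y_true, y_pred, groups):
    """Return accuracy computed separately for each group label."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical face-recognition results tagged by skin-tone group.
y_true = [1, 1, 0, 1, 0, 1, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
groups = ["light", "light", "light", "light",
          "dark", "dark", "dark", "dark"]

print(per_group_accuracy(y_true, y_pred, groups))
# → {'light': 0.75, 'dark': 0.5}  — a gap like this signals bias
```

A single pooled accuracy (here 5/8 = 0.625) would hide the 25-point gap between the two groups.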
Consider a computer vision model trained only on daytime images but used at night. What problem arises?
Think about how training data affects model performance on new types of images.
This is distribution shift (also called domain shift): the model is applied to data very different from its training distribution and makes errors because it never learned to handle those conditions.
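The day/night mismatch can even be caught with a very crude check before the model runs. This is a sketch under simplifying assumptions: images are hypothetical 0-255 grayscale pixel lists, and mean brightness stands in for a real distribution-shift detector.

```python
# Sketch: flag inputs whose brightness is far from the training distribution.
def mean_brightness(image):
    return sum(image) / len(image)

def looks_out_of_distribution(train_images, new_image, tolerance=60):
    """Flag an image whose mean brightness differs greatly from training data."""
    train_mean = (sum(mean_brightness(img) for img in train_images)
                  / len(train_images))
    return abs(mean_brightness(new_image) - train_mean) > tolerance

day_images = [[200, 210, 190], [180, 220, 205]]  # bright daytime scenes
night_image = [20, 15, 30]                       # dark nighttime scene

print(looks_out_of_distribution(day_images, night_image))  # → True
```

Real systems use richer statistics than brightness, but the principle is the same: compare incoming data against the training distribution and refuse or flag inputs the model was never trained on.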
You want a computer vision model that is easy to explain and audit to avoid misuse. Which model type is best?
Think about which model type is easiest to understand and explain.
Interpretable models such as decision trees expose their decision logic directly, making it easier to audit their behavior and to detect and prevent misuse.
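To make "easy to audit" concrete, here is a tiny hand-written decision rule in the spirit of a decision tree. The feature names and thresholds are hypothetical, and a real system would learn the tree from data; the point is that every branch can be read and checked line by line.

```python
# Sketch: a rule-based classifier whose every decision is visible for audit.
def classify(features):
    """Interpretable decision logic; each branch is directly inspectable."""
    if features["brightness"] < 50:
        return "reject: too dark to analyse reliably"
    if features["face_size_px"] < 40:
        return "reject: face too small"
    return "accept"

print(classify({"brightness": 120, "face_size_px": 80}))  # → accept
print(classify({"brightness": 30, "face_size_px": 80}))
# → reject: too dark to analyse reliably
```

Contrast this with a deep neural network, whose millions of weights give no comparably direct account of why a particular face was accepted or rejected.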