Prompt Engineering / GenAI (~20 mins)

AI governance frameworks in Prompt Engineering / GenAI - ML Experiment: Train & Evaluate

Experiment - AI governance frameworks
Problem: You have developed an AI model that makes decisions affecting users. However, there is no clear system to ensure the AI behaves fairly, transparently, and safely.
Current Metrics: No formal metrics; incidents of biased decisions reported by users; lack of transparency in AI decisions.
Issue: The AI model lacks governance controls, leading to risks of unfair outcomes, lack of accountability, and potential harm to users.
Your Task
Design and implement an AI governance framework that ensures fairness, transparency, and accountability in the AI model's decisions.
You cannot change the AI model's core algorithm.
You must use explainability tools and monitoring techniques.
The framework should be easy to understand for non-technical stakeholders.
Solution
import shap
import pandas as pd
from sklearn.metrics import accuracy_score, confusion_matrix

# Assume model and data are preloaded: model, X_test, y_test
# (X_test is a DataFrame with a 'group' column identifying the user group)

# Step 1: Evaluate fairness by comparing performance across user groups
# (for example, group A vs. group B)

def group_performance(model, X, y, group_mask):
    """Return accuracy and confusion matrix for the rows selected by group_mask."""
    preds = model.predict(X[group_mask])
    acc = accuracy_score(y[group_mask], preds)
    cm = confusion_matrix(y[group_mask], preds)
    return acc, cm

# Step 2: Explain predictions using SHAP
explainer = shap.Explainer(model, X_test)
shap_values = explainer(X_test)

# Step 3: Track a simple monitoring metric (overall accuracy)
accuracy = accuracy_score(y_test, model.predict(X_test))

# Step 4: Generate the SHAP summary plot
# (show=False defers display so the figure can be saved or embedded in a dashboard)
shap.summary_plot(shap_values, X_test, show=False)

# Step 5: Output a summary report for stakeholders
report = {
    'overall_accuracy': accuracy,
    'group_A_performance': group_performance(model, X_test, y_test, X_test['group'] == 'A'),
    'group_B_performance': group_performance(model, X_test, y_test, X_test['group'] == 'B'),
    'shap_summary': 'SHAP summary plot generated (not stored in report)'
}

print('AI Governance Report:', report)
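The group comparison above can be sanity-checked on toy arrays before wiring it to the real model. A minimal sketch, using plain NumPy instead of scikit-learn; the `group_accuracy` helper and the data are illustrative:

```python
import numpy as np

def group_accuracy(y_true, y_pred, mask):
    """Accuracy restricted to the rows selected by a boolean group mask."""
    return float((y_true[mask] == y_pred[mask]).mean())

# Toy data: group A is predicted well, group B poorly
groups = np.array(['A', 'A', 'A', 'B', 'B', 'B'])
y_true = np.array([1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 1, 1])

acc_a = group_accuracy(y_true, y_pred, groups == 'A')  # 3/3 correct
acc_b = group_accuracy(y_true, y_pred, groups == 'B')  # 1/3 correct
print(f"group A accuracy: {acc_a:.2f}, group B accuracy: {acc_b:.2f}")
```

A gap this large between groups is exactly the kind of signal the governance report should surface.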
- Added fairness evaluation by comparing accuracy and confusion matrices for different user groups.
- Integrated SHAP explainability to clarify how features influence model decisions.
- Set up simple monitoring metrics like overall accuracy.
- Created a summary report to communicate AI behavior transparently.
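To keep the report readable for non-technical stakeholders, the SHAP output can be condensed into a ranked top-features list. A minimal sketch, assuming the attributions are available as an `(n_samples, n_features)` array (SHAP exposes this as `shap_values.values`); the feature names and numbers here are illustrative:

```python
import numpy as np

def top_features(attributions, feature_names, k=3):
    """Rank features by mean absolute attribution (most influential first)."""
    importance = np.abs(attributions).mean(axis=0)
    order = np.argsort(importance)[::-1][:k]
    return [(feature_names[i], float(importance[i])) for i in order]

# Toy attributions: 'income' dominates, 'age' second, 'group' negligible
vals = np.array([[ 0.5, -0.2,  0.01],
                 [ 0.4,  0.3, -0.02],
                 [-0.6,  0.1,  0.00]])
print(top_features(vals, ['income', 'age', 'group'], k=2))
```

A plain-language line like "income was the strongest driver of decisions" communicates far more to stakeholders than a raw SHAP plot.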
Results Interpretation

Before: No fairness checks, no transparency, user complaints about bias.
After: Fairness metrics show balanced accuracy across groups, explanations clarify decisions, monitoring enables ongoing oversight.

Implementing AI governance frameworks helps detect and reduce bias, improves transparency, and builds trust by making AI decisions understandable and accountable.
Bonus Experiment
Now try integrating automated alerts that notify stakeholders when fairness metrics drop below a threshold.
💡 Hint
Use monitoring tools to track metrics continuously and trigger email or dashboard alerts when anomalies occur.
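As a starting point for the bonus experiment, a threshold check over per-group accuracies could look like the sketch below. The `check_fairness_alert` helper and its thresholds are hypothetical; the messages it returns would feed an email or dashboard notifier:

```python
def check_fairness_alert(group_metrics, threshold=0.8, max_gap=0.05):
    """Flag any group whose accuracy falls below `threshold`, and flag
    an accuracy gap between groups that exceeds `max_gap`.
    Returns a list of human-readable alert messages (empty = all clear)."""
    alerts = []
    for group, acc in group_metrics.items():
        if acc < threshold:
            alerts.append(f"ALERT: accuracy for group {group} is {acc:.2f} (< {threshold:.2f})")
    gap = max(group_metrics.values()) - min(group_metrics.values())
    if gap > max_gap:
        alerts.append(f"ALERT: accuracy gap between groups is {gap:.2f} (> {max_gap:.2f})")
    return alerts

# Group B is both below threshold and far from group A: two alerts fire
for alert in check_fairness_alert({'A': 0.91, 'B': 0.74}):
    print(alert)
```

Running such a check on a schedule (e.g. after each batch of predictions) turns the one-off report into continuous oversight.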