Human approval workflows are often integrated into AI systems. What is their primary role?
Think about why humans might be involved in AI decision processes.
Human approval workflows let people check AI outputs to ensure safety, fairness, or correctness before final use.
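The gate described above can be sketched in a few lines. This is a minimal illustration, not a production pattern; the `reviewer` callable here is a stand-in for a real human decision.

```python
# Hypothetical sketch of a human-approval gate: an AI output is only
# released for final use after a reviewer signs off on it.
def approval_gate(ai_output, reviewer):
    """Return the AI output only if the reviewer approves it."""
    if reviewer(ai_output):
        return ai_output
    return None  # rejected outputs are withheld from final use

# Simulated reviewer policy: reject outputs flagged as unsafe.
def reviewer(output):
    return not output.get("unsafe", False)

print(approval_gate({"text": "hello", "unsafe": False}, reviewer))
print(approval_gate({"text": "bad", "unsafe": True}, reviewer))
```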
You want an AI model that can provide confidence scores to help humans decide when to approve or review text classification results. Which model type is best?
Confidence scores help humans decide when to trust AI outputs.
Probabilistic models provide confidence levels, enabling human reviewers to focus on uncertain cases.
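A small sketch of that triage step, assuming the model attaches a probability to each prediction. The predictions and confidence values below are made-up illustration data, not output from a real classifier.

```python
# Hypothetical classifier outputs: each prediction carries a confidence
# score so reviewers can focus on the uncertain cases.
predictions = [
    {"text": "win a prize now", "label": "spam", "confidence": 0.95},
    {"text": "meeting at 3pm", "label": "ham", "confidence": 0.62},
]

# Route low-confidence predictions to a human; auto-accept the rest.
needs_review = [p for p in predictions if p["confidence"] < 0.8]
auto_accepted = [p for p in predictions if p["confidence"] >= 0.8]

print(len(needs_review), "sent to human review")
print(len(auto_accepted), "auto-accepted")
```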
An AI system has 85% accuracy alone. After adding a human approval step that corrects 10% of AI errors, what is the new effective accuracy?
Calculate how many errors remain after human correction.
The AI alone errs on 15% of cases. Human approval fixes 10% of those errors, so the error rate drops to 15% × 0.9 = 13.5%, giving a new effective accuracy of 86.5%.
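The arithmetic above can be checked directly:

```python
accuracy = 0.85
error_rate = 1 - accuracy             # 15% of cases are wrong
fixed = error_rate * 0.10             # humans correct 10% of those errors
new_error_rate = error_rate - fixed   # 0.135
new_accuracy = 1 - new_error_rate     # 0.865, i.e. 86.5%
print(f"new effective accuracy: {new_accuracy:.1%}")
```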
Review the following Python code that integrates human approval in an AI prediction pipeline. What error will it cause?
def ai_predict(input_data):
    # Returns a prediction label and a confidence score.
    return {'label': 'spam', 'confidence': 0.7}

def human_approval(prediction):
    if prediction['confidence'] < 0.8:
        return input('Approve prediction? (yes/no): ')
    else:
        return 'yes'

result = ai_predict('email text')
approved = human_approval(result)
if approved == 'yes':
    print('Final label:', result['label'])
else:
    print('Prediction rejected')
Consider where this code might run and how input() behaves.
input() blocks waiting for interactive keyboard input. In automated or server environments with no interactive stdin attached, the call raises an EOFError at runtime.
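One way to guard against that failure is to check whether stdin is interactive before prompting. This is a sketch, not the only fix; the fallback policy of rejecting uncertain predictions when no human is available is an assumption.

```python
import sys

def human_approval(prediction, threshold=0.8):
    # Auto-approve confident predictions.
    if prediction['confidence'] >= threshold:
        return 'yes'
    # Only prompt when an interactive terminal is actually attached,
    # so input() cannot raise EOFError in automated environments.
    if sys.stdin is not None and sys.stdin.isatty():
        return input('Approve prediction? (yes/no): ')
    # Assumed fallback policy: no human available, reject the prediction.
    return 'no'

print(human_approval({'label': 'spam', 'confidence': 0.9}))
```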
In a human approval workflow, the system compares the AI model's confidence score against a threshold to decide when to request human review. Which threshold setting best balances reducing human workload with maintaining safety?
Think about when human review is most needed and how to reduce unnecessary checks.
Routing only predictions that fall below a moderate confidence threshold to human review focuses reviewer effort on uncertain cases: confident predictions are auto-accepted, which keeps workload low, while risky ones still get checked. Setting the threshold too high forces review of nearly everything; setting it too low lets uncertain predictions through unchecked.
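The trade-off can be seen by varying the threshold over a batch of predictions. The confidence values below are illustrative, not from a real model; review is triggered when confidence falls below the threshold, matching the earlier snippet.

```python
# Illustrative confidence scores for a batch of predictions.
confidences = [0.95, 0.91, 0.88, 0.72, 0.55, 0.99, 0.64, 0.83]

def review_fraction(threshold):
    """Fraction of predictions sent to human review at this threshold."""
    flagged = [c for c in confidences if c < threshold]
    return len(flagged) / len(confidences)

# Raising the threshold sends more predictions to humans.
for t in (0.6, 0.8, 0.95):
    print(f"threshold={t}: {review_fraction(t):.0%} sent to review")
```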
