Agentic AI · ~20 mins

Human Approval Workflows in Agentic AI - Practice Problems & Coding Challenges

Challenge - 5 Problems
🎖️ Human Approval Workflow Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
🧠 conceptual · intermediate · Time limit: 1:30
What is the main purpose of a human approval workflow in AI systems?

Human approval workflows are often integrated into AI systems. What is their primary role?

A. To automatically retrain AI models without human input
B. To replace AI models with human decision-making entirely
C. To allow humans to review and approve AI decisions before final action is taken
D. To speed up AI predictions by skipping human checks
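The idea behind option C can be sketched in a few lines. This is a minimal illustration, not a real API: the names `propose_action` and `run_with_approval` are hypothetical, and the human reviewer is simulated by a callback.

```python
# Minimal sketch of a human approval gate: the AI proposes an action,
# but a human (here simulated by a callback) must approve it before
# any final action is taken.

def propose_action(input_data):
    # Stand-in for an AI model proposing an action on some input.
    return {"action": "delete_record", "target": input_data}

def run_with_approval(proposal, approve):
    # `approve` is a callable standing in for the human reviewer.
    if approve(proposal):
        return f"executed {proposal['action']} on {proposal['target']}"
    return "rejected by human reviewer"

proposal = propose_action("user_42")
# A cautious reviewer policy: never approve destructive actions.
print(run_with_approval(proposal, approve=lambda p: p["action"] != "delete_record"))
```

The key design point is that the execution step is gated on the reviewer's decision, rather than the AI acting directly.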
model choice · intermediate · Time limit: 1:30
Which AI model type best supports human approval workflows for text classification?

You want an AI model that can provide confidence scores to help humans decide when to approve or review text classification results. Which model type is best?

A. Deterministic rule-based model without confidence scores
B. Unsupervised clustering model without labels
C. Generative model that creates new text samples
D. Probabilistic model that outputs confidence probabilities
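A short sketch of why a probabilistic model (option D) supports this workflow: a softmax turns raw scores into probabilities, and the top probability serves as a confidence that can drive the approve/review decision. The labels and scores here are made up for illustration.

```python
import math

def softmax(scores):
    # Convert raw model scores into probabilities that sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def route(scores, labels, threshold=0.8):
    # Route confident predictions to auto-approval, the rest to a human.
    probs = softmax(scores)
    confidence = max(probs)
    label = labels[probs.index(confidence)]
    decision = "auto-approve" if confidence >= threshold else "human review"
    return label, confidence, decision

# A confident prediction skips review; a close call goes to a human.
print(route([3.2, 0.1], ["spam", "ham"]))
print(route([0.4, 0.3], ["spam", "ham"]))
```

A deterministic or unlabeled model gives the human nothing comparable to act on, which is what makes option D the natural fit.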
metrics · advanced · Time limit: 2:00
How does adding a human approval step affect the overall accuracy metric of an AI system?

An AI system has 85% accuracy alone. After adding a human approval step in which reviewers correct one third of the AI's errors (5 percentage points), what is the new effective accuracy?

A. 90%, because human approval fixes some AI mistakes
B. 85%, because human approval does not change AI accuracy
C. 75%, because human approval slows down the system
D. 95%, because human approval corrects all AI errors
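The arithmetic generalizes to a simple formula: human review can only fix cases the AI got wrong, so the new accuracy is base + (1 − base) × fraction of errors corrected. A quick check with hypothetical values:

```python
def effective_accuracy(base, fraction_of_errors_corrected):
    # Reviewers only touch the (1 - base) error mass; the corrected
    # fraction of that mass is added back to overall accuracy.
    return base + (1 - base) * fraction_of_errors_corrected

# An 85%-accurate model whose reviewers catch a third of its errors:
print(round(effective_accuracy(0.85, 1 / 3), 4))   # ≈ 0.90
# If reviewers caught only 10% of errors, the gain would be smaller:
print(round(effective_accuracy(0.85, 0.10), 4))    # ≈ 0.865
```

Note that review never pushes accuracy above 100%, and correcting *all* errors would be required to reach it.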
🔧 debug · advanced · Time limit: 2:00
Identify the error in this human approval workflow code snippet

Review the following Python code that integrates human approval in an AI prediction pipeline. What error will it cause?

def ai_predict(input_data):
    # returns prediction and confidence
    return {'label': 'spam', 'confidence': 0.7}

def human_approval(prediction):
    if prediction['confidence'] < 0.8:
        return input('Approve prediction? (yes/no): ')
    else:
        return 'yes'

result = ai_predict('email text')
approved = human_approval(result)
if approved == 'yes':
    print('Final label:', result['label'])
else:
    print('Prediction rejected')
A. The input() function will cause a runtime error in non-interactive environments
B. The ai_predict function returns a list instead of a dictionary
C. The confidence key is missing in the prediction dictionary
D. The human_approval function always returns None
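One way to fix the bug named in option A, as a sketch: in a non-interactive environment (a CI job or scheduled batch run), `input()` raises EOFError because stdin is closed. A fail-safe gate catches that and rejects rather than crashing. The `ask` parameter is an addition for testability, not part of the original snippet.

```python
def human_approval(prediction, ask=input):
    # High-confidence predictions are auto-approved, as in the original.
    if prediction["confidence"] >= 0.8:
        return "yes"
    try:
        # Normalize the reviewer's reply so "YES " still counts.
        return ask("Approve prediction? (yes/no): ").strip().lower()
    except (EOFError, OSError):
        # No human available (non-interactive run): fail safe by rejecting.
        return "no"

result = {"label": "spam", "confidence": 0.7}
# Simulate a reviewer instead of reading from stdin.
print(human_approval(result, ask=lambda _: "yes"))
```

Defaulting to rejection when no reviewer is reachable is the conservative choice for an approval workflow; the opposite default would silently remove the human from the loop.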
hyperparameter · expert · Time limit: 2:30
Which hyperparameter adjustment best balances AI autonomy and human approval workload?

In a human approval workflow, each prediction's confidence is compared against a threshold: predictions below the threshold are routed to human review. Which threshold setting best balances reducing human workload while maintaining safety?

A. Set the threshold very high (e.g., 0.95) so that almost all predictions are routed to human review
B. Set the threshold low (e.g., 0.1) so that only the most uncertain predictions are routed to human review
C. Set the threshold at 0.5 to route roughly half of predictions to human review at random
D. Disable the threshold and have humans review all predictions
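The workload side of this trade-off is easy to simulate. This sketch uses a toy uniform-random confidence distribution, purely for illustration; a real model's distribution would be skewed, but the mechanics are the same: the higher the threshold, the more predictions land on the human's desk.

```python
import random

def review_fraction(confidences, threshold):
    # Fraction of predictions routed to human review
    # (those whose confidence falls below the threshold).
    sent = sum(1 for c in confidences if c < threshold)
    return sent / len(confidences)

random.seed(0)
# Toy model: 10,000 predictions with uniform-random confidences.
confidences = [random.random() for _ in range(10_000)]

for threshold in (0.1, 0.5, 0.8, 0.95):
    print(threshold, round(review_fraction(confidences, threshold), 2))
```

Safety does not appear in this simulation; picking the threshold in practice means weighing the review fraction above against the cost of an unreviewed error.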