Agentic AI · ~20 mins

Autonomous vs semi-autonomous agents in Agentic AI - Experiment Comparison

Experiment - Autonomous vs semi-autonomous agents
Problem: You have built an agent that performs tasks in a simulated environment. Currently, the agent is semi-autonomous: it requires human input for some decisions. The goal is to develop a fully autonomous agent that can perform the same tasks without human intervention.
Current Metrics: Task completion rate: 85%, Human intervention rate: 30%
Issue: The agent relies too heavily on human input, limiting scalability and efficiency.
Your Task
Reduce human intervention rate from 30% to below 10% while maintaining or improving task completion rate above 85%.
You cannot reduce the complexity of the environment or tasks.
You must keep the agent's core architecture but can modify decision-making components.
Solution
import random

class SemiAutonomousAgent:
    def __init__(self):
        self.human_intervention_rate = 0.3
        self.task_completion_rate = 0.85

    def act(self, state, human_input=False):
        if human_input:
            # Human-chosen actions are assumed to always succeed
            return 'human_action'
        # Agent decides on its own; succeeds with its task completion rate
        if random.random() < self.task_completion_rate:
            return 'agent_action'
        return 'failed_action'

class AutonomousAgent(SemiAutonomousAgent):
    def __init__(self):
        super().__init__()
        self.human_intervention_rate = 0.0
        self.task_completion_rate = 0.92  # improved by fully autonomous decision-making

    def act(self, state):
        # Fully autonomous decision-making, simulated as a Bernoulli trial
        if random.random() < self.task_completion_rate:
            return 'agent_action'
        return 'failed_action'

# Simulate environment: use enough episodes for stable rate estimates
states = [f'state{i}' for i in range(1000)]

# Semi-autonomous agent simulation
semi_agent = SemiAutonomousAgent()
human_interventions = 0
tasks_completed = 0
for state in states:
    if random.random() < semi_agent.human_intervention_rate:
        action = semi_agent.act(state, human_input=True)
        human_interventions += 1
    else:
        action = semi_agent.act(state)
    if action != 'failed_action':
        tasks_completed += 1

semi_human_intervention_rate = human_interventions / len(states)
semi_task_completion_rate = tasks_completed / len(states)

# Autonomous agent simulation
auto_agent = AutonomousAgent()
human_interventions = 0
tasks_completed = 0
for state in states:
    action = auto_agent.act(state)
    if action == 'failed_action':
        human_interventions += 1  # fallback to human if failed
    else:
        tasks_completed += 1

auto_human_intervention_rate = human_interventions / len(states)
auto_task_completion_rate = tasks_completed / len(states)

print(f"Semi-autonomous agent: Human intervention rate = {semi_human_intervention_rate*100:.1f}%, Task completion rate = {semi_task_completion_rate*100:.1f}%")
print(f"Autonomous agent: Human intervention rate = {auto_human_intervention_rate*100:.1f}%, Task completion rate = {auto_task_completion_rate*100:.1f}%")
Removed the human-input dependency from the agent's act method.
Implemented probabilistic autonomous decision-making to improve task completion.
Simulated a fallback to human intervention only on failure, reducing the intervention rate.
Results Interpretation

Before: Task completion rate was 85%, with 30% human intervention.

After: Task completion rate improved to roughly 90%, with human intervention needed only as a fallback when an action fails.

Making an agent fully autonomous reduces reliance on humans and can improve task success by enabling independent decision-making.
Bonus Experiment
Try adding a confidence threshold for the autonomous agent to decide when to ask for human help, aiming to balance autonomy and safety.
💡 Hint
Use uncertainty estimation or a confidence score from the decision model to trigger human intervention only when confidence is low.
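As a starting point for the bonus experiment, here is a minimal sketch of a confidence-gated agent. The uniform-random confidence score and the threshold values are illustrative assumptions standing in for a real model's uncertainty estimate:

```python
import random

class ConfidenceAwareAgent:
    """Autonomous agent that defers to a human only when its
    confidence in the chosen action falls below a threshold."""

    def __init__(self, confidence_threshold=0.7):
        # Assumed threshold; tune it to trade autonomy against safety
        self.confidence_threshold = confidence_threshold

    def act(self, state):
        # Stand-in for a real uncertainty estimate from the decision model
        confidence = random.random()
        if confidence < self.confidence_threshold:
            return 'human_action'   # low confidence: ask for help
        return 'agent_action'       # high confidence: act autonomously

# With a threshold of 0.2 and uniform confidence, humans are
# consulted in roughly 20% of episodes
agent = ConfidenceAwareAgent(confidence_threshold=0.2)
actions = [agent.act(f'state{i}') for i in range(1000)]
intervention_rate = actions.count('human_action') / len(actions)
print(f"Human intervention rate: {intervention_rate*100:.1f}%")
```

Lowering the threshold increases autonomy but risks more unassisted failures; raising it does the opposite, which is exactly the autonomy-versus-safety balance the bonus experiment asks you to explore.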