Which statement best describes why agents are considered autonomous in AI?
Think about what autonomy means in everyday life, like a self-driving car making decisions on its own.
Autonomy means the agent can sense its environment and decide on actions independently, without step-by-step human direction, which is why option D is correct.
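The sense-decide-act loop behind this idea can be sketched in a few lines; the percepts and the decision rule below are illustrative, not taken from the question:

```python
# Minimal sense-decide-act sketch: the agent maps each percept to an
# action on its own, with no external instruction at each step.
def autonomous_step(percept):
    """Decide an action from the percept alone."""
    if percept == 'obstacle':
        return 'turn'
    return 'move'

percepts = ['clear', 'obstacle', 'clear']
actions = [autonomous_step(p) for p in percepts]
print(actions)  # ['move', 'turn', 'move']
```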
You want to build an AI agent that can learn from experience and improve over time. Which model architecture is best suited?
Consider which model learns by trial and error and adapts its behavior.
Reinforcement learning lets an agent learn from its interactions with the environment and improve its behavior over time, unlike models whose behavior is fixed once trained.
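As a deliberately tiny illustration, the sketch below shows trial-and-error learning with a one-step value update; the two actions, the reward function, and the hyperparameters are invented for this example:

```python
import random

random.seed(0)
q = {'move': 0.0, 'stop': 0.0}   # action-value estimates
alpha = 0.5                       # learning rate

def reward(action):
    # Toy environment: 'move' is the rewarding action.
    return 1.0 if action == 'move' else 0.0

for _ in range(100):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(list(q))
    else:
        action = max(q, key=q.get)
    # Update the estimate toward the observed reward.
    q[action] += alpha * (reward(action) - q[action])

# After learning, the agent prefers the action that earned reward.
print(max(q, key=q.get))
```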
Which metric is most appropriate to evaluate an AI agent's success in completing a sequence of tasks over time?
Think about how agents get feedback from their environment over time.
Cumulative reward sums the feedback an agent receives across a sequence of tasks, so it directly measures sequential performance over time, which is what agent evaluation requires.
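A discounted cumulative reward can be computed in one line; the discount factor and the per-step rewards below are illustrative values:

```python
def cumulative_reward(rewards, gamma=0.9):
    """Sum per-step rewards, discounting later rewards by gamma."""
    return sum(gamma**t * r for t, r in enumerate(rewards))

episode = [1.0, 0.0, 1.0, 1.0]   # rewards from one episode, step by step
print(round(cumulative_reward(episode), 3))  # 2.539
```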
Given this pseudocode for an agent's action selection, which bug causes the agent to always pick the first action?
actions = ['move', 'stop', 'turn']
selected_action = None
for action in actions:
    if action == 'move':
        selected_action = action
        break
else:
    selected_action = actions[0]

Consider how the break statement affects loop execution.
The break exits the loop as soon as 'move' is found; because 'move' is the first element of the list, the remaining actions are never evaluated, so the agent always picks 'move'.
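One possible fix is a membership test instead of a search loop with an early break; this is a sketch, and the preferred-action parameter is an assumption added for illustration:

```python
import random

actions = ['move', 'stop', 'turn']

def select_action(preferred='move'):
    # Pick the preferred action only when it is actually available;
    # otherwise fall back to a choice over all actions rather than
    # silently defaulting to actions[0].
    if preferred in actions:
        return preferred
    return random.choice(actions)

print(select_action())        # 'move'
print(select_action('jump'))  # falls back to a random available action
```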
What is the output of this Python code simulating a simple agent decision process?
class Agent:
    def __init__(self):
        self.state = 0

    def act(self, observation):
        match observation:
            case 'obstacle':
                self.state += 1
                return 'turn'
            case 'clear':
                self.state = 0
                return 'move'
            case _:
                return 'stop'

agent = Agent()
outputs = []
inputs = ['clear', 'obstacle', 'obstacle', 'clear', 'unknown']
for inp in inputs:
    outputs.append(agent.act(inp))
print(outputs)

Trace the state changes and returned actions for each input.
The agent returns 'move' on 'clear', increments its state and returns 'turn' on each 'obstacle', resets its state and returns 'move' on the next 'clear', and returns 'stop' on the unmatched 'unknown', so the output is ['move', 'turn', 'turn', 'move', 'stop'].
