Prompt Engineering / GenAI (~20 mins)

Why agents make autonomous decisions in Prompt Engineering / GenAI - Challenge Your Understanding

Challenge - 5 Problems
🎖️
Autonomous Agent Mastery
Get all challenges correct to earn this badge!
Test your skills under time pressure!
🧠 Conceptual
intermediate
Why do autonomous agents need to make decisions without human input?

Imagine a robot vacuum cleaner working in your home. Why does it need to decide on its own where to clean next instead of waiting for you to tell it?

A. Because it cannot communicate with humans at all.
B. Because it wants to avoid humans and work secretly.
C. Because it is programmed to ignore human commands.
D. Because it can respond faster to changes in the environment without waiting for instructions.
💡 Hint

Think about how waiting for human instructions might slow down the robot's work.

Model Choice
intermediate
Choosing the right model for autonomous decision-making

You want to build an agent that can decide the best route to deliver packages in a city with changing traffic. Which model type is best suited for this task?

A. A reinforcement learning model that learns from trial and error in the environment.
B. A clustering model that groups delivery locations.
C. A simple rule-based system with fixed instructions.
D. A supervised learning model trained on fixed routes only.
💡 Hint

Consider which model can adapt by learning from experience in a changing environment.

Metrics
advanced
Evaluating autonomous agent decision quality

An autonomous agent is tested on how well it completes tasks without human help. Which metric best measures how often it makes the correct decision?

A. Latency - the time taken to make a decision.
B. Loss - the error between predicted and actual outcomes.
C. Accuracy - the percentage of correct decisions made by the agent.
D. Throughput - the number of tasks completed per minute.
💡 Hint

Think about which metric directly shows how often the agent's decisions are right.
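As a refresher before answering, the metrics named in the options can be computed directly. Here is a minimal sketch of one of them, the fraction of correct decisions, using a made-up decision log for illustration:

```python
# Hypothetical decision log for illustration; not from the quiz itself.
decisions = ["left", "right", "left", "left"]   # the agent's choices
reference = ["left", "left",  "left", "left"]   # the correct choices

# Accuracy = correct decisions / total decisions.
matches = sum(d == r for d, r in zip(decisions, reference))
accuracy = matches / len(decisions)
print(f"Accuracy: {accuracy:.0%}")  # 3 of 4 decisions match -> 75%
```

Latency, loss, and throughput are also easy to measure, but none of them counts how often a decision is right.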

🔧 Debug
advanced
Why does this autonomous agent fail to learn?

Consider this simplified reinforcement learning code snippet for an agent:

rewards = [1, -1, 1, 1]
actions = ["left", "right", "left", "left"]

for i in range(len(actions)):
    if rewards[i] > 0:
        policy = actions[i]
print(policy)

Why does this code fail to learn the best action?

A. Because it overwrites the policy every time instead of accumulating knowledge.
B. Because the rewards list has negative values, which are not allowed.
C. Because the loop does not run due to an incorrect range.
D. Because the actions list contains strings instead of numbers.
💡 Hint

Look at how the variable policy changes inside the loop.

🧠 Conceptual
expert
Why do autonomous agents balance exploration and exploitation?

In autonomous decision-making, agents often must choose between trying new actions (exploration) and using known good actions (exploitation). Why is this balance important?

A. Because only exploring new actions guarantees the best long-term results.
B. Because balancing both helps the agent find better options while using what it already knows.
C. Because only exploiting known actions avoids any risk of failure.
D. Because agents cannot remember past actions, so they choose randomly.
💡 Hint

Think about how trying new things and using what works can both help an agent improve.
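One common way to balance the two is an epsilon-greedy rule: with a small probability the agent explores a random action, and otherwise it exploits the action with the highest estimated value. The sketch below is illustrative only; the function and value names are made up, not part of the quiz:

```python
import random

def choose_action(values, epsilon=0.1, rng=random):
    """Epsilon-greedy choice over a dict of action -> estimated value."""
    if rng.random() < epsilon:
        return rng.choice(list(values))      # explore: try any action
    return max(values, key=values.get)       # exploit: best-known action

# Hypothetical value estimates after some experience.
estimates = {"left": 0.6, "right": 0.2}
print(choose_action(estimates, epsilon=0.0))  # pure exploitation -> "left"
```

Setting epsilon to 0 never explores (risking a missed better option), while epsilon of 1 never exploits (wasting what the agent has learned); a small positive epsilon gives the balance the question describes.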