Challenge - 5 Problems
Master of Tracing Agent Reasoning Chains
Get all challenges correct to earn this badge!
Test your skills under time pressure!
❓ Predict Output
intermediate · 2:00 remaining
What is the output of this agent reasoning chain?
Consider an agent that processes input through three reasoning steps, each appending a letter to a string. The initial input is an empty string.
Step 1 appends 'A', Step 2 appends 'B', Step 3 appends 'C'. What is the final output string?
Agentic AI
def agent_chain(input_str):
    step1 = input_str + 'A'
    step2 = step1 + 'B'
    step3 = step2 + 'C'
    return step3

output = agent_chain('')
print(output)
Attempts: 2 left
💡 Hint
Trace each step carefully, adding letters in order.
✔ Explanation
The agent starts with an empty string, then adds 'A', then 'B', then 'C', resulting in 'ABC'.
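The chain can be traced step by step. A minimal sketch (the loop and per-step print are illustrative additions, not part of the original snippet):

```python
# Trace each reasoning step as it runs; step letters match the problem.
def agent_chain(input_str):
    state = input_str
    for i, letter in enumerate(['A', 'B', 'C'], start=1):
        state = state + letter
        print(f"Step {i}: {state!r}")  # intermediate state after this step

    return state

output = agent_chain('')
print(output)  # -> ABC
```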
❓ Model Choice
intermediate · 2:00 remaining
Which model architecture best supports tracing agent reasoning chains?
You want to build an AI agent that can explain its reasoning step-by-step. Which model architecture is best suited for this task?
Attempts: 2 left
💡 Hint
Think about models that handle sequences and context.
✔ Explanation
RNNs with attention can process sequences and keep track of previous steps, enabling stepwise reasoning explanations.
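The idea of attending over earlier steps can be sketched as a toy, untrained RNN-style loop: each step updates a hidden state, and attention then scores the final state against every earlier step. All shapes, weights, and names here are illustrative assumptions, not a real architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

hidden_size = 4
W = rng.normal(size=(hidden_size, hidden_size))  # toy recurrent weights

hiddens = []                  # hidden state after each reasoning step
h = np.zeros(hidden_size)
for step_input in rng.normal(size=(3, hidden_size)):
    h = np.tanh(W @ h + step_input)   # recurrent update
    hiddens.append(h)

# Attention: score the current state against every earlier step; the
# weights indicate which previous steps the current step draws on.
query = hiddens[-1]
scores = np.array([query @ prev for prev in hiddens])
weights = softmax(scores)
print(weights)  # one weight per reasoning step, summing to 1
```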
❓ Hyperparameter
advanced · 2:00 remaining
Which hyperparameter adjustment improves agent reasoning chain clarity?
You have a transformer-based agent that generates reasoning chains. You want to improve the clarity and coherence of each reasoning step. Which hyperparameter change is most effective?
Attempts: 2 left
💡 Hint
More attention heads help the model focus on different parts of the input simultaneously.
✔ Explanation
Increasing attention heads allows the model to capture more nuanced relationships in the reasoning chain, improving clarity.
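Mechanically, more heads means the embedding is split into more independent subspaces, each attending separately. A minimal sketch of that split (the dimensions are illustrative; the only real constraint shown is that `embed_dim` must be divisible by `num_heads`):

```python
import numpy as np

embed_dim, num_heads = 8, 4
assert embed_dim % num_heads == 0     # standard multi-head constraint
head_dim = embed_dim // num_heads

x = np.arange(embed_dim, dtype=float)   # one token's embedding
heads = x.reshape(num_heads, head_dim)  # each head sees its own slice
print(heads.shape)  # (4, 2): more heads -> more independent subspaces
```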
❓ Metrics
advanced · 2:00 remaining
Which metric best evaluates the quality of agent reasoning chains?
You want to measure how well an AI agent explains its reasoning step-by-step. Which metric is most appropriate?
Attempts: 2 left
💡 Hint
Think about metrics that compare generated text to reference text.
✔ Explanation
BLEU measures n-gram overlap between generated and reference text, making it suitable for evaluating reasoning explanations against reference explanations.
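BLEU's core idea can be illustrated with modified unigram precision. Real BLEU (e.g. NLTK's `sentence_bleu`) also uses higher-order n-grams and a brevity penalty; this toy function and its example strings are simplifying assumptions, not the full metric.

```python
from collections import Counter

def unigram_precision(candidate, reference):
    cand_tokens = candidate.split()
    ref_counts = Counter(reference.split())
    matches = 0
    for word, count in Counter(cand_tokens).items():
        matches += min(count, ref_counts[word])  # clip to reference count
    return matches / len(cand_tokens)

cand = "the agent adds A then B then C"
ref = "the agent appends A then B then C"
print(unigram_precision(cand, ref))  # 7 of 8 tokens match -> 0.875
```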
🔧 Debug
expert · 2:00 remaining
What error does this agent reasoning chain code raise?
Examine the following code snippet for an agent reasoning chain. What error occurs when running it?
Agentic AI
def reasoning_chain(input_list):
    result = []
    for i in range(len(input_list)):
        step = input_list[i] + i
        result.append(step)
    return result

output = reasoning_chain(['a', 'b', 'c'])
print(output)
Attempts: 2 left
💡 Hint
Check the operation combining string and integer types.
✔ Explanation
The code tries to add a string and an integer directly, causing a TypeError.
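One way to fix it is to convert the integer to a string before concatenating (an f-string would work equally well); this corrected version is a sketch, not the only possible fix:

```python
def reasoning_chain(input_list):
    result = []
    for i, item in enumerate(input_list):
        step = item + str(i)   # str + str, so no TypeError
        result.append(step)
    return result

print(reasoning_chain(['a', 'b', 'c']))  # -> ['a0', 'b1', 'c2']
```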
