Prompt Engineering / GenAI · ~20 mins

Chain-of-thought prompting in Prompt Engineering / GenAI - Practice Problems & Coding Challenges

Challenge - 5 Problems
🎖️
Chain-of-Thought Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
🧠 Conceptual
intermediate
Time limit: 1:30
What is the main benefit of chain-of-thought prompting in AI models?

Chain-of-thought prompting helps AI models by:

A. Reducing the model's size to improve speed
B. Increasing the model's training data size automatically
C. Changing the model architecture to add more layers
D. Breaking down complex problems into smaller reasoning steps before answering
Attempts: 2 left
💡 Hint

Think about how explaining your thinking helps solve hard problems.
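The hint above can be made concrete with a small sketch: the same question posed as a direct prompt versus with a chain-of-thought cue appended. No model is called here; the `make_cot_prompt` helper is illustrative, and the point is the prompt structure, not any specific API.

```python
# Sketch: direct prompt vs. chain-of-thought (CoT) prompt.
# The reasoning cue nudges the model to emit intermediate
# steps before stating a final answer.

COT_CUE = "Let's think step-by-step."

def make_cot_prompt(question: str) -> str:
    """Append a reasoning cue so the model breaks the problem into steps."""
    return f"{question} {COT_CUE}"

direct = "If there are 3 apples and you get 2 more, how many apples do you have?"
cot = make_cot_prompt(direct)

print(cot)
```

The only change is the trailing cue, which is what elicits the stepwise reasoning described in option D.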

Predict Output
intermediate
Time limit: 1:30
What is the output of this chain-of-thought prompt example?

Given the prompt below, what will the model most likely output?

Prompt: "If there are 3 apples and you get 2 more, how many apples do you have? Let's think step-by-step."
A. "You start with 3 apples. You get 2 more apples. 3 + 2 = 5 apples."
B. "Apples are red and tasty."
C. "The answer is 2 apples."
D. "You have 3 apples and 2 oranges, so 5 fruits."
Attempts: 2 left
💡 Hint

Chain-of-thought prompts encourage stepwise reasoning.
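The expected stepwise output can be reproduced with a small sketch that renders the reasoning a CoT-prompted model would typically produce for this addition word problem (the `stepwise_addition` helper is invented for illustration):

```python
def stepwise_addition(start: int, added: int, item: str) -> str:
    """Render a step-by-step answer for a simple addition word problem,
    mirroring the structure of a chain-of-thought response."""
    total = start + added
    return (f"You start with {start} {item}s. "
            f"You get {added} more {item}s. "
            f"{start} + {added} = {total} {item}s.")

print(stepwise_addition(3, 2, "apple"))
# → You start with 3 apples. You get 2 more apples. 3 + 2 = 5 apples.
```

Note how each sentence corresponds to one reasoning step, with the arithmetic made explicit before the final count.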

Model Choice
advanced
Time limit: 2:00
Which model type benefits most from chain-of-thought prompting?

Among these AI model types, which one gains the most accuracy improvement from chain-of-thought prompting?

A. Small rule-based expert systems
B. Large language models with billions of parameters
C. Simple linear regression models
D. Basic decision trees with few splits
Attempts: 2 left
💡 Hint

Consider which models can generate detailed text explanations.

Hyperparameter
advanced
Time limit: 2:00
Which hyperparameter setting best supports chain-of-thought prompting in text generation?

When using chain-of-thought prompting, which setting helps the model produce longer, detailed reasoning?

A. Setting batch size to 1 for faster inference
B. Reducing learning rate to speed up training
C. Increasing max token length to allow longer outputs
D. Decreasing temperature to zero for deterministic output
Attempts: 2 left
💡 Hint

Think about output length and detail in reasoning.
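A minimal sketch of why the max-token setting matters for chain-of-thought output. The parameter name `max_tokens` follows common generation-API conventions but varies by library; the truncation function stands in for the generation loop stopping at the cap.

```python
# Generation stops once the token cap is reached, so a low cap can
# cut off a chain-of-thought before it reaches the final answer.

def truncate_output(tokens: list[str], max_tokens: int) -> list[str]:
    """Keep at most max_tokens tokens, as a generation loop would."""
    return tokens[:max_tokens]

reasoning = ["You", "start", "with", "3", "apples.",
             "You", "get", "2", "more.", "3", "+", "2", "=", "5."]

print(truncate_output(reasoning, max_tokens=5))   # reasoning cut short
print(truncate_output(reasoning, max_tokens=50))  # full chain fits
```

Raising the cap does not make the model reason better by itself; it simply leaves room for the longer, detailed reasoning that CoT prompting elicits.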

Metrics
expert
Time limit: 2:30
Which metric best measures improvement from chain-of-thought prompting on reasoning tasks?

You want to evaluate if chain-of-thought prompting improves your model's reasoning. Which metric is most appropriate?

A. Accuracy on multi-step reasoning benchmark datasets
B. Training loss decrease during fine-tuning
C. Inference speed measured in tokens per second
D. Model size in number of parameters
Attempts: 2 left
💡 Hint

Focus on measuring reasoning correctness, not training or speed.
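The evaluation in option A can be sketched as exact-match accuracy over final answers on a reasoning benchmark. The benchmark items and both sets of predictions below are invented for illustration; real evaluations would use an established multi-step reasoning dataset.

```python
# Sketch: comparing direct vs. CoT prompting by exact-match accuracy
# on a toy multi-step reasoning benchmark (answers are final values
# extracted from model outputs).

def accuracy(predictions: list[str], gold: list[str]) -> float:
    """Fraction of predictions that exactly match the gold answers."""
    correct = sum(p.strip() == g.strip() for p, g in zip(predictions, gold))
    return correct / len(gold)

gold = ["5", "12", "7"]
direct_preds = ["5", "10", "7"]  # hypothetical direct-prompt answers
cot_preds = ["5", "12", "7"]     # hypothetical CoT answers

print(f"direct: {accuracy(direct_preds, gold):.2f}")  # → direct: 0.67
print(f"cot:    {accuracy(cot_preds, gold):.2f}")     # → cot:    1.00
```

A higher score with the CoT prompt on the same held-out items is direct evidence of improved reasoning correctness, which training loss, speed, and parameter count do not measure.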