Prompt Engineering / GenAI (~20 mins)

Temperature and sampling parameters in Prompt Engineering / GenAI - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual
intermediate
Effect of Temperature on Model Output Diversity

When using a language model with a temperature parameter, what happens to the model's output as the temperature increases from 0.1 to 1.0?

A. The output becomes more random and diverse, allowing more creative responses.
B. The output becomes more deterministic and repetitive, reducing creativity.
C. The model stops generating any output because the temperature is too high.
D. The output length increases automatically with higher temperature.
💡 Hint

Think about how temperature controls randomness in choosing words.
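The hint above can be made concrete with a minimal sketch in plain Python (the logits are invented for illustration): dividing the logits by the temperature before the softmax sharpens the distribution at low temperature and flattens it at high temperature.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert logits to probabilities after dividing by temperature.
    Lower temperature sharpens the distribution; higher flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                     # made-up next-token logits
cold = softmax_with_temperature(logits, 0.1) # near-deterministic
hot = softmax_with_temperature(logits, 1.0)  # noticeably more spread out
```

At temperature 0.1 the top token gets almost all of the probability mass, so sampling is nearly deterministic; at 1.0 the lower-ranked tokens keep a real chance of being picked, which is what makes the output more diverse.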

Predict Output
intermediate
Sampling with Top-k Parameter

Given a language model sampling with top-k=3, which option best describes the effect on word selection?

A. The model samples from all words with probability above 0.3.
B. The model always picks the single most probable word.
C. The model only considers the top 3 most probable next words for sampling.
D. The model ignores the top 3 words and samples from the rest.
💡 Hint

Top-k limits the candidate words to a fixed number.
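A sketch of the idea behind the hint, using a toy probability list: top-k zeroes out every token outside the k most probable ones and renormalizes what is left.

```python
def top_k_filter(probs, k):
    """Keep only the k highest-probability tokens and renormalize."""
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep = set(ranked[:k])                   # indices of the top-k tokens
    filtered = [p if i in keep else 0.0 for i, p in enumerate(probs)]
    total = sum(filtered)
    return [p / total for p in filtered]

probs = [0.4, 0.3, 0.15, 0.1, 0.05]          # toy next-token distribution
filtered = top_k_filter(probs, 3)            # only the top 3 survive
```

With k=3, the last two tokens are dropped entirely and the remaining mass is rescaled, so sampling can only ever pick from the three most probable candidates.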

Hyperparameter
advanced
Choosing Temperature for Balanced Creativity

You want your language model to generate responses that are creative but still coherent. Which temperature value is most suitable?

A. 3.0
B. 0.5
C. 1.5
D. 0.01
💡 Hint

Very low temperatures make output boring; very high make it chaotic.
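One way to see the trade-off in the hint is to measure the entropy of the token distribution at different temperatures (the logits here are invented for illustration): near-zero temperature collapses onto one token, very high temperature approaches uniform, and mid-range values sit in between.

```python
import math

def softmax_t(logits, t):
    """Softmax of logits scaled by temperature t."""
    scaled = [l / t for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

def entropy(probs):
    """Shannon entropy in nats; higher means sampling is more random."""
    return -sum(p * math.log(p) for p in probs if p > 0)

logits = [3.0, 1.5, 1.0, 0.2]                # made-up next-token logits
for t in (0.01, 0.5, 3.0):
    print(t, round(entropy(softmax_t(logits, t)), 3))
```

Entropy rises monotonically with temperature: 0.01 is essentially greedy (boring), 3.0 is close to coin-flipping over all tokens (chaotic), and a moderate value keeps some preference for likely tokens while still allowing variation.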

Metrics
advanced
Impact of Sampling Parameters on Perplexity

How does increasing temperature during sampling affect the perplexity metric measured on generated text?

A. Perplexity generally increases because outputs become less predictable.
B. Perplexity decreases because outputs become more accurate.
C. Perplexity remains unchanged as it depends only on training data.
D. Perplexity becomes zero because the model guesses perfectly.
💡 Hint

Higher randomness means less predictable text.
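A minimal sketch of the metric, assuming we already have the probability the model assigned to each generated token: perplexity is the exponential of the average negative log-probability, so less predictable tokens (lower assigned probabilities) push it up.

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-probability per token."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

confident = perplexity([0.9, 0.9, 0.9])      # tokens the model found likely
uncertain = perplexity([0.5, 0.5, 0.5])      # tokens it found less likely
```

A uniform coin-flip over each token gives perplexity 2, while confidently predicted tokens give a value close to 1; sampling at higher temperature tends to produce tokens the model itself rated less likely, raising perplexity.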

🔧 Debug
expert
Unexpected Output with Temperature and Top-p Sampling

Consider this code snippet using a language model with temperature=0.8 and top-p=0.9. The output is unexpectedly repetitive and lacks diversity. Which is the most likely cause?

model.generate(prompt, temperature=0.8, top_p=0.9, top_k=0)
A. The model is ignoring temperature when top-p is set.
B. Temperature 0.8 is too low to allow any randomness.
C. Top-k is set to 0, which disables top-k filtering, causing less diversity.
D. Top-p 0.9 is too restrictive, limiting word choices too much.
💡 Hint

Top-p controls cumulative probability cutoff for candidate words.
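The hint can be sketched as a small nucleus-sampling filter over a toy distribution: tokens are taken in probability order until their cumulative mass reaches p, and everything outside that nucleus is dropped before renormalizing.

```python
def top_p_filter(probs, p):
    """Nucleus (top-p) sampling: keep the smallest set of tokens whose
    cumulative probability reaches p, then renormalize."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep, cum = set(), 0.0
    for i in order:
        keep.add(i)
        cum += probs[i]
        if cum >= p:                          # nucleus is large enough
            break
    total = sum(probs[i] for i in keep)
    return [probs[i] / total if i in keep else 0.0 for i in range(len(probs))]

probs = [0.5, 0.3, 0.1, 0.1]                  # toy next-token distribution
filtered = top_p_filter(probs, 0.9)           # nucleus covers the first three
```

Unlike top-k, the number of surviving tokens varies with the shape of the distribution: a confident model may keep only one or two tokens, while a flat distribution keeps many.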