Prompt Engineering / GenAI (~10 mins)

Temperature and sampling parameters in Prompt Engineering / GenAI - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
1. Fill in the blank (easy)

Complete the code to set the temperature parameter for text generation.

output = model.generate(input_text, temperature=[1])
Drag options to blanks, or click a blank and then click an option.
A) 2.0
B) 1.5
C) -0.5
D) 0.7
💡 Hint: Common Mistakes
Negative temperature values raise errors.
Setting temperature too high produces incoherent, nonsensical output.
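For intuition, temperature typically works by dividing the logits before the softmax: small values sharpen the distribution toward the top token, large values flatten it. A minimal plain-Python sketch (the function name here is illustrative, not part of any specific library):

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by temperature before the softmax; temperature
    # must be positive, which is why negative values cause errors.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
sharp = softmax_with_temperature(logits, 0.7)  # favors the top token
flat = softmax_with_temperature(logits, 2.0)   # spreads probability out
```

With temperature 0.7 the top token gets clearly more probability than with temperature 2.0, which is why moderate values like 0.7 give coherent but still varied output.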
2. Fill in the blank (medium)

Complete the code to set the top_p parameter for nucleus sampling.

output = model.generate(input_text, top_p=[1])
A) 0.9
B) 0.1
C) 1.5
D) -0.2
💡 Hint: Common Mistakes
Using values greater than 1 for top_p; it is a cumulative probability, so it must lie between 0 and 1.
Using negative values for top_p.
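Nucleus (top_p) sampling keeps the smallest set of highest-probability tokens whose cumulative probability reaches top_p, then samples only from that set. A short illustrative sketch (the function name is made up for this example):

```python
def nucleus_tokens(probs, top_p):
    # Sort tokens by probability, then keep the smallest prefix whose
    # cumulative probability reaches top_p; sampling uses only this set.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    return kept

probs = [0.5, 0.3, 0.15, 0.05]
nucleus_tokens(probs, 0.9)  # keeps tokens 0, 1, 2 (0.5 + 0.3 + 0.15 = 0.95)
```

With top_p = 0.9 the low-probability tail (token 3 here) is cut off, which is why values like 0.9 trim implausible tokens while keeping diversity, and why values outside 0-1 make no sense as a cumulative probability.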
3. Fill in the blank (hard)

Fix the error in the code by choosing the correct temperature value.

output = model.generate(input_text, temperature=[1])
A) 0
B) 1.2
C) 0.5
D) -1
💡 Hint: Common Mistakes
Zero or negative temperature causes errors in implementations that divide the logits by temperature.
Values above 1 make the output increasingly random.
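The reason zero fails in many samplers is mechanical: the logits are divided by temperature, so zero triggers a division by zero. A sketch of the kind of validation an implementation might perform (this helper is hypothetical, not from any specific library):

```python
def check_temperature(temperature):
    # Sampling code typically computes logit / temperature, so zero
    # (division by zero) and negative values are rejected up front.
    if temperature <= 0:
        raise ValueError("temperature must be greater than 0")
    return temperature

check_temperature(0.5)  # a valid, moderately deterministic setting
```

Note that some APIs instead treat temperature 0 as a special case meaning greedy (argmax) decoding; for this exercise, assume the strict divide-by-temperature behavior described in the hint.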
4. Fill in the blank (hard)

Fill both blanks to set temperature and top_p for balanced sampling.

output = model.generate(input_text, temperature=[1], top_p=[2])
A) 0.8
B) 0.95
C) 0.5
D) 1.1
💡 Hint: Common Mistakes
Setting top_p above 1 causes errors.
Setting temperature too low makes the output repetitive and predictable.
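When both parameters are set, they compose: temperature reshapes the distribution first, then top_p truncates its tail before sampling. A plain-Python sketch of that pipeline (function names are illustrative, not from any specific library):

```python
import math
import random

def sample_token(logits, temperature=0.8, top_p=0.95, rng=None):
    # Step 1: temperature-scaled softmax over the logits.
    rng = rng or random.Random(0)
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Step 2: nucleus (top_p) filtering of the resulting distribution.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    # Step 3: renormalize over the kept tokens and sample one.
    mass = sum(probs[i] for i in kept)
    weights = [probs[i] / mass for i in kept]
    return kept[rng.choices(range(len(kept)), weights=weights)[0]]

token = sample_token([3.0, 1.0, 0.2])
```

A balanced setting like temperature 0.8 with top_p 0.95 keeps the output varied while still discarding the least plausible tokens.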
5. Fill in the blank (hard)

Fill all three blanks to set temperature, top_p, and max_tokens for generation.

output = model.generate(input_text, temperature=[1], top_p=[2], max_tokens=[3])
A) 0.6
B) 0.9
C) 50
D) 100
💡 Hint: Common Mistakes
Setting max_tokens too low cuts the output short mid-sentence.
Setting top_p outside the 0-1 range causes errors, and extreme temperature values degrade output quality.
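Unlike temperature and top_p, max_tokens does not affect which token is chosen; it is simply a hard cap on how many tokens the generation loop may emit. A toy loop illustrating why a low cap truncates output (both the function and the stand-in "model" are hypothetical):

```python
def generate_tokens(next_token, max_tokens=100, stop=None):
    # max_tokens is a hard cap on output length: the loop exits after
    # that many steps even if the model has not reached a stop token,
    # which is why a value that is too low truncates the response.
    out = []
    for _ in range(max_tokens):
        token = next_token(out)
        if token == stop:
            break
        out.append(token)
    return out

# Stand-in "model" that just counts upward.
short = generate_tokens(lambda out: len(out), max_tokens=3)
# With max_tokens=3 the output is cut off at three tokens: [0, 1, 2]
```

A value like 100 leaves room for a complete answer, while 50 may be enough for short replies; the right cap depends on the expected response length.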