Prompt Engineering / GenAI (~20 mins)

Top-p and top-k sampling in Prompt Engineering / GenAI - Practice Problems & Coding Challenges

Challenge - 5 Problems
🎖️
Top-p and Top-k Sampling Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
🧠 Conceptual · Intermediate · Time limit: 1:30
Understanding Top-k Sampling
In top-k sampling, what does the parameter k control when generating text from a language model?
A. The maximum length of the generated text sequence
B. The number of highest probability tokens considered for sampling at each step
C. The temperature scaling factor applied to logits before sampling
D. The cumulative probability threshold to include tokens for sampling
💡 Hint: Think about how many tokens the model looks at before picking the next word.
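To make the idea concrete, here is a minimal, hypothetical sketch of top-k filtering in NumPy. The function name, example logits, and k value are illustrative only, not taken from any particular library:

```python
import numpy as np

def top_k_filter(logits, k):
    """Keep only the k highest-probability tokens; renormalize over them."""
    probs = np.exp(logits) / np.sum(np.exp(logits))  # softmax over all tokens
    top_indices = np.argsort(probs)[-k:]             # ids of the k largest probabilities
    top_probs = probs[top_indices] / probs[top_indices].sum()  # renormalize
    return top_indices, top_probs

logits = np.array([2.0, 1.0, 0.5, 0.1])
indices, probs = top_k_filter(logits, k=2)
print(indices)  # the two highest-probability token ids
print(probs)    # probabilities renormalized over those two tokens
```

In this sketch, sampling would then draw only from `indices` using `probs`, which is exactly what the parameter k constrains.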
🧠 Conceptual · Intermediate · Time limit: 1:30
Understanding Top-p (Nucleus) Sampling
What does the parameter p represent in top-p (nucleus) sampling?
A. The maximum number of tokens generated
B. The temperature value to adjust randomness
C. The fixed number of tokens to consider for sampling
D. The cumulative probability threshold to include tokens for sampling
💡 Hint: It relates to the total probability mass of tokens considered.
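For comparison with top-k, here is a hedged, illustrative sketch of nucleus (top-p) filtering in NumPy. The function name, probability values, and threshold are made up for the example:

```python
import numpy as np

def top_p_filter(probs, p):
    """Keep the smallest set of tokens whose cumulative probability reaches p."""
    order = np.argsort(probs)[::-1]                    # token ids, most probable first
    cumulative = np.cumsum(probs[order])               # running probability mass
    cutoff = int(np.argmax(cumulative >= p)) + 1       # smallest prefix covering p
    nucleus = order[:cutoff]
    nucleus_probs = probs[nucleus] / probs[nucleus].sum()  # renormalize over nucleus
    return nucleus, nucleus_probs

probs = np.array([0.2, 0.4, 0.1, 0.3])
nucleus, nprobs = top_p_filter(probs, p=0.75)
print(nucleus)  # token ids inside the nucleus
```

Unlike top-k, the number of tokens kept here varies from step to step: it is whatever prefix of the sorted distribution is needed to accumulate probability mass p.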
Predict Output · Advanced · Time limit: 2:00
Output of Top-k Sampling Code Snippet
What is the output of the following Python code simulating top-k sampling probabilities?
import numpy as np
np.random.seed(0)
logits = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
k = 3
# Select top-k logits
indices = np.argsort(logits)[-k:]
topk_probs = np.exp(logits[indices]) / np.sum(np.exp(logits[indices]))
sampled_index = np.random.choice(indices, p=topk_probs)
print(sampled_index)
A. 3
B. 0
C. 4
D. 2
💡 Hint: Check which indices are in the top 3 and how the probabilities are computed.
Metrics · Advanced · Time limit: 1:30
Effect of Top-p on Diversity Metrics
If you decrease the top-p value from 0.9 to 0.5 during text generation, what is the expected effect on the diversity of generated text?
A. Diversity decreases because fewer tokens are considered
B. Diversity increases because more tokens are considered
C. Diversity remains the same because top-p does not affect token selection
D. Diversity fluctuates randomly regardless of top-p
💡 Hint: Think about how the cumulative probability threshold limits token choices.
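You can check the intuition behind this question numerically. The sketch below is illustrative (the helper name and the toy distribution are invented for the example); it counts how many tokens remain eligible at two different thresholds:

```python
import numpy as np

def nucleus_size(probs, p):
    """Number of tokens kept by top-p filtering for a given threshold p."""
    sorted_probs = np.sort(probs)[::-1]              # most probable first
    return int(np.argmax(np.cumsum(sorted_probs) >= p)) + 1

probs = np.array([0.35, 0.25, 0.15, 0.10, 0.08, 0.07])
print(nucleus_size(probs, 0.9))  # larger p keeps more tokens eligible
print(nucleus_size(probs, 0.5))  # smaller p keeps fewer tokens eligible
```

With this toy distribution, lowering p shrinks the candidate set, which is the mechanism behind the diversity effect the question asks about.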
🔧 Debug · Expert · Time limit: 2:30
Identifying Error in Top-k Sampling Implementation
Consider this code snippet for top-k sampling. Which option correctly identifies the error causing incorrect sampling?
import numpy as np
logits = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
k = 2
indices = np.argsort(logits)[:k]
topk_logits = logits[indices]
probs = np.exp(topk_logits) / np.sum(np.exp(topk_logits))
sampled_index = np.random.choice(indices, p=probs)
print(sampled_index)
A. The code selects the lowest k logits instead of the highest k logits
B. The softmax calculation is incorrect because it should use log probabilities
C. The sampling should be done over logits, not indices
D. The random seed is missing, causing non-reproducible results
💡 Hint: Check how argsort is used to select top-k logits.
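Once you have attempted the debug problem, it can help to compare against a working variant. The sketch below reuses the same logits and shows one common pattern for keeping the highest k logits (note that it resolves the hinted issue, so try the problem first):

```python
import numpy as np

np.random.seed(0)  # fixed seed so the sample is reproducible
logits = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
k = 2
indices = np.argsort(logits)[-k:]  # [-k:] keeps the k LARGEST logits
probs = np.exp(logits[indices]) / np.sum(np.exp(logits[indices]))  # softmax over top-k
sampled_index = np.random.choice(indices, p=probs)
print(sampled_index)
```

Because `argsort` sorts in ascending order, slicing from the end (`[-k:]`) rather than the start (`[:k]`) is what selects the highest-probability candidates.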