Practice - 5 Tasks
Answer the questions below
1. Fill in the blank (easy)
Complete the code to generate text using a simple language model.
Prompt Engineering / GenAI
output = model.generate([1])
Common Mistakes:
- Passing input_text directly without specifying max_length causes an error.
- Using temperature or num_return_sequences alone does not define output length.
Explanation: The generate function requires the max_length parameter to specify how many tokens to generate.
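To see the role max_length plays, here is a minimal, hypothetical sketch in plain Python — not the real transformers implementation — of the loop that generation runs: tokens are appended until the sequence reaches max_length, which is why the call fails without it.

```python
# Hypothetical sketch (not the real library code): max_length bounds the
# generation loop. A real model would predict each next token; here we
# append a dummy token ID 0 instead.
def toy_generate(input_ids, max_length=10):
    ids = list(input_ids)
    while len(ids) < max_length:
        ids.append(0)  # stand-in for "predict the next token"
    return ids

output = toy_generate([101, 42], max_length=5)
print(output)  # [101, 42, 0, 0, 0]
```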
2. Fill in the blank (medium)
Complete the code to tokenize input text before generation.
Prompt Engineering / GenAI
inputs = tokenizer([1], return_tensors='pt')
Common Mistakes:
- Passing max_length or temperature instead of the text string.
- Forgetting to pass the input text causes an error.
Explanation: The tokenizer needs the actual input text string to convert it into tokens.
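As a hedged illustration of why the first argument must be the raw text, toy_tokenize below is a hypothetical stand-in (not the real transformers tokenizer) that splits the text into words and maps each to an integer ID — which only makes sense when it receives actual text, not a parameter like max_length.

```python
def toy_tokenize(text, vocab=None):
    # Hypothetical stand-in for a tokenizer call: the first argument is the
    # raw input text, split into words and mapped to integer IDs.
    vocab = {} if vocab is None else vocab
    ids = [vocab.setdefault(word, len(vocab)) for word in text.split()]
    return {"input_ids": ids}

inputs = toy_tokenize("hello world hello")
print(inputs["input_ids"])  # [0, 1, 0] — repeated words share an ID
```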
3. Fill in the blank (hard)
Fix the error in the code to decode generated tokens correctly.
Prompt Engineering / GenAI
generated_text = tokenizer.decode([1], skip_special_tokens=True)
Common Mistakes:
- Trying to decode the input tokens or raw output object instead of generated token IDs.
- Passing the wrong variable causes a type error.
Explanation: The decode function needs the generated token IDs, which are stored in generated_ids.
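A hedged sketch of what decoding does can make the fix concrete. The vocabulary and toy_decode below are invented for illustration (not the real transformers decode): the function takes generated token IDs — not input text or a raw output object — and skip_special_tokens=True drops marker IDs such as padding and end-of-sequence before mapping back to text.

```python
# Invented toy vocabulary and special-token set for illustration only.
ID_TO_TOKEN = {0: "<pad>", 1: "<eos>", 2: "hello", 3: "world"}
SPECIAL_IDS = {0, 1}

def toy_decode(token_ids, skip_special_tokens=False):
    if skip_special_tokens:
        token_ids = [i for i in token_ids if i not in SPECIAL_IDS]
    return " ".join(ID_TO_TOKEN[i] for i in token_ids)

generated_ids = [2, 3, 1, 0]  # token IDs produced by generation
print(toy_decode(generated_ids, skip_special_tokens=True))  # hello world
```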
4. Fill in the blank (hard)
Fill both blanks to generate text with temperature and return multiple sequences.
Prompt Engineering / GenAI
outputs = model.generate(inputs.input_ids, [1], [2])
Common Mistakes:
- Forgetting to set do_sample=True disables the effect of temperature.
- Not setting num_return_sequences returns only one sequence.
Explanation: do_sample=True enables sampling with temperature, and num_return_sequences=3 returns multiple outputs.
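The two parameters can be illustrated with a hypothetical single-step sketch (toy_sample is invented here, not the library's sampler): temperature rescales the scores before the softmax, and num_return_sequences simply repeats the draw. Real generation does this per token over a full vocabulary.

```python
import math
import random

def toy_sample(logits, temperature=1.0, num_return_sequences=1, seed=0):
    # Temperature rescales logits before softmax: low temperature sharpens
    # the distribution, high temperature flattens it.
    rng = random.Random(seed)
    scaled = [l / temperature for l in logits]
    total = sum(math.exp(s) for s in scaled)
    probs = [math.exp(s) / total for s in scaled]
    # One draw per requested sequence, like num_return_sequences.
    return [rng.choices(range(len(probs)), weights=probs)[0]
            for _ in range(num_return_sequences)]

samples = toy_sample([2.0, 1.0, 0.5], temperature=0.7, num_return_sequences=3)
print(len(samples))  # 3 sampled token IDs
```

Without the sampling step (the equivalent of do_sample=False), generation would always pick the highest-scoring token, so temperature would have no effect.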
5. Fill in the blank (hard)
Fill all three blanks to create a dictionary comprehension that filters tokens by length and decodes them.
Prompt Engineering / GenAI
result = {token: tokenizer.decode(token) for token in outputs if len(token) [1] [2] and token != [3]}
Common Mistakes:
- Using < or <= instead of > changes the filter logic.
- Comparing token to 0 instead of the empty string causes errors.
Explanation: We filter tokens longer than 5 characters and exclude empty strings ('').
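The completed comprehension can be run end to end with the answers the explanation gives (>, 5, and ''). The outputs list and toy_decode below are invented for illustration; toy_decode stands in for tokenizer.decode so the example is self-contained.

```python
# Invented sample data; toy_decode is a hypothetical stand-in for
# tokenizer.decode.
outputs = ["transformers", "", "hi", "attention"]

def toy_decode(token):
    return token.upper()

# Keep tokens longer than 5 characters and skip empty strings.
result = {token: toy_decode(token)
          for token in outputs
          if len(token) > 5 and token != ''}
print(result)  # {'transformers': 'TRANSFORMERS', 'attention': 'ATTENTION'}
```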