Prompt Engineering / GenAI (~10 mins)

Code generation in Prompt Engineering / GenAI - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1: Fill in the blank (easy)

Complete the code to generate text using a simple language model.

output = model.generate([1])
A. input_text
B. max_length=50
C. temperature=0.7
D. num_return_sequences=1
Common Mistakes
Passing input_text directly without specifying max_length causes an error.
Using temperature or num_return_sequences alone does not define output length.
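To see why max_length matters, here is a toy sketch of a generate call. The model here is a hypothetical stand-in (a plain function that repeats the last token ID), not a real language model API:

```python
def generate(input_ids, max_length=50):
    """Toy stand-in for model.generate: extends the sequence by
    repeating the last ID until it reaches max_length."""
    out = list(input_ids)
    while len(out) < max_length:
        out.append(out[-1])
    return out

# Without max_length, generation length would be undefined in this sketch.
output = generate([1, 2, 3], max_length=10)
```

The key point mirrors the task: the keyword argument bounds the output length, while temperature or num_return_sequences alone say nothing about when generation stops.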
Task 2: Fill in the blank (medium)

Complete the code to tokenize input text before generation.

inputs = tokenizer([1], return_tensors='pt')
A. input_text
B. max_length=50
C. temperature=0.7
D. padding=True
Common Mistakes
Passing max_length or temperature instead of the text string.
Forgetting to pass the input text causes an error.
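A minimal sketch of the tokenization step, using a toy word-to-ID tokenizer in place of a real one (all names here are stand-ins; a real tokenizer with return_tensors='pt' would return tensors rather than a plain list):

```python
VOCAB = {}

def tokenize(text, padding=False):
    """Toy stand-in for a tokenizer: assigns each new word
    an integer ID and returns the list of IDs."""
    return [VOCAB.setdefault(word, len(VOCAB)) for word in text.split()]

# The text string is the first argument, as in the task above.
input_text = "prompt engineering is fun"
inputs = tokenize(input_text)
```

Passing a keyword like max_length=50 in place of the text would fail immediately, because the tokenizer's first positional argument must be the string to encode.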
Task 3: Fill in the blank (hard)

Fix the error in the code to decode generated tokens correctly.

generated_text = tokenizer.decode([1], skip_special_tokens=True)
A. output
B. inputs
C. input_text
D. generated_ids
Common Mistakes
Trying to decode the input tokens or raw output object instead of generated token IDs.
Passing the wrong variable causes a type error.
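A toy sketch of the decoding step. The vocabulary and decode function below are hypothetical stand-ins; the point is that decode takes generated token IDs, not the input text or a raw output object:

```python
ID_TO_TOKEN = {0: "<pad>", 1: "hello", 2: "world"}  # hypothetical vocabulary

def decode(token_ids, skip_special_tokens=True):
    """Toy stand-in for tokenizer.decode: maps IDs back to words,
    optionally dropping special tokens like <pad>."""
    words = [ID_TO_TOKEN[i] for i in token_ids]
    if skip_special_tokens:
        words = [w for w in words if not (w.startswith("<") and w.endswith(">"))]
    return " ".join(words)

generated_ids = [0, 1, 2]
generated_text = decode(generated_ids, skip_special_tokens=True)
```

Passing input_text (a string) instead of a list of IDs would raise an error in the lookup, which is the type error the mistake note warns about.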
Task 4: Fill in the blanks (hard)

Fill both blanks to generate text with temperature and return multiple sequences.

outputs = model.generate(inputs.input_ids, [1], [2])
A. temperature=0.9
B. max_length=100
C. num_return_sequences=3
D. do_sample=True
Common Mistakes
Forgetting to set do_sample=True disables temperature effect.
Not setting num_return_sequences returns only one sequence.
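The interaction between do_sample and num_return_sequences can be sketched with a toy generator (a hypothetical stand-in, not a real model; temperature is accepted for signature parity but ignored by this toy):

```python
import random

def generate(input_ids, max_length=20, temperature=1.0,
             do_sample=False, num_return_sequences=1):
    """Toy stand-in for model.generate: when do_sample=True, appends
    random IDs; otherwise deterministically repeats the last ID."""
    sequences = []
    for _ in range(num_return_sequences):
        out = list(input_ids)
        while len(out) < max_length:
            out.append(random.randint(0, 9) if do_sample else out[-1])
        sequences.append(out)
    return sequences

outputs = generate([1, 2], max_length=8, temperature=0.9,
                   do_sample=True, num_return_sequences=3)
```

Without do_sample=True, every returned sequence is identical (so temperature has no effect), and without num_return_sequences you get a single sequence back.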
Task 5: Fill in the blanks (hard)

Fill all three blanks to create a dictionary comprehension filtering tokens by length and decoding.

result = {token: tokenizer.decode(token) for token in outputs if len(token) [1] [2] and token != [3]}
A. >
B. 5
C. 0
D. ''
Common Mistakes
Using < or <= instead of > changes the filter logic.
Comparing token to 0 instead of the empty string makes the check meaningless, since a string never equals 0.
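The filtering pattern from this task can be sketched with plain strings and a hypothetical decode stand-in (here it just uppercases, in place of a real tokenizer.decode):

```python
outputs = ["generated sequence one", "hi", "", "another long sample"]

def decode(token):
    """Hypothetical stand-in for tokenizer.decode."""
    return token.upper()

# Keep only tokens longer than 5 characters and not the empty string,
# matching the `len(token) > 5 and token != ''` filter in the task.
result = {token: decode(token) for token in outputs
          if len(token) > 5 and token != ''}
```

Flipping the comparison to `<` would instead keep only the short entries, which is the inverted filter logic the mistake note describes.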