Practice - 5 Tasks
Answer the questions below
1. Fill in the blank (easy)
Complete the code to load a pre-trained language model for content writing assistance.
Prompt Engineering / GenAI

from transformers import [1]

model = [1].from_pretrained('gpt2')
Common Mistakes
Using GPT2Tokenizer instead of GPT2LMHeadModel
Using BertModel which is a different architecture
We use GPT2LMHeadModel to load GPT-2 with its language-modeling head, which is what enables text generation.
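For reference, the completed answer looks like this. A minimal sketch, assuming the Hugging Face transformers package is installed (the first call downloads the model weights):

```python
from transformers import GPT2LMHeadModel

# GPT2LMHeadModel bundles GPT-2 with the language-modeling head needed
# for text generation. GPT2Tokenizer only tokenizes text, and BertModel
# is a different architecture entirely -- the two Common Mistakes above.
model = GPT2LMHeadModel.from_pretrained('gpt2')
```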
2. Fill in the blank (medium)
Complete the code to tokenize input text for the model.
Prompt Engineering / GenAI

from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
inputs = tokenizer('[1]', return_tensors='pt')
Common Mistakes
Passing variable names instead of string literals
Using empty strings or non-text inputs
We provide a sample input string, such as 'Hello, how are you?', for the tokenizer to convert into token ids for the model.
3. Fill in the blank (hard)
Fix the error in the code to generate text from the model.
Prompt Engineering / GenAI

outputs = model.generate([1]['input_ids'], max_length=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Common Mistakes
Using undefined variable names like 'input' or 'tokens'
Passing raw text instead of token ids
The tokenized inputs are stored in 'inputs', so we pass inputs['input_ids'] to model.generate to produce text.
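With the blank filled, the snippet runs end to end. A sketch assuming the transformers and PyTorch packages, reusing the sample prompt from Task 2:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')

# Tokenize first; `inputs` holds the token ids the model consumes --
# generate() takes ids, never raw text.
inputs = tokenizer('Hello, how are you?', return_tensors='pt')

outputs = model.generate(inputs['input_ids'], max_length=50)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)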
4. Fill in the blank (hard)
Fill both blanks to create a function that generates content given a prompt.
Prompt Engineering / GenAI

def generate_content(prompt):
    inputs = tokenizer([1], return_tensors='pt')
    outputs = model.generate(inputs[[2]], max_length=100)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
Common Mistakes
Using 'text' instead of 'prompt' as input
Using 'attention_mask' for generation input
We pass the prompt string to the tokenizer and use the 'input_ids' key for generation.
5. Fill in the blank (hard)
Fill all three blanks to create a pipeline for content writing assistance.
Prompt Engineering / GenAI

from transformers import pipeline

content_generator = pipeline('[1]', model='gpt2')
result = content_generator('[2]', max_length=[3])
print(result[0]['generated_text'])
Common Mistakes
Using 'text-classification' instead of 'text-generation'
Passing numeric prompt instead of string
Setting max_length too low or missing
We use the 'text-generation' pipeline with a string prompt and a max_length that bounds the output length.
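A sketch of the completed pipeline (assuming the transformers package; the prompt string and length cap below are illustrative choices, not the only valid answers):

```python
from transformers import pipeline

# Blank [1]: the 'text-generation' task, not 'text-classification'.
content_generator = pipeline('text-generation', model='gpt2')

# Blank [2]: any string prompt; blank [3]: a reasonable output cap.
result = content_generator('Write a tagline for a coffee shop:', max_length=50)
print(result[0]['generated_text'])
```

The pipeline wraps tokenization, generation, and decoding from Tasks 1–4 into a single call, which is why it only needs the task name, a prompt, and a length.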