NLP · ~10 mins

T5 for Text-to-Text Tasks in NLP - Interactive Code Practice

Practice: 5 Tasks
Answer the questions below.
Task 1: Fill in the blank (easy)

Complete the code to load the T5 tokenizer.

from transformers import T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained('[1]')
A. bert-base-uncased
B. gpt2
C. t5-small
D. roberta-base
Common Mistakes
Using tokenizer names from other models like 'bert-base-uncased' causes errors.
Misspelling the model name.
Task 2: Fill in the blank (medium)

Complete the code to prepare input text for the T5 model.

input_text = "translate English to German: The house is wonderful."
inputs = tokenizer([1], return_tensors='pt')
A. output_text
B. input_text
C. "The house is wonderful."
D. "translate English to German"
Common Mistakes
Passing raw strings instead of the variable.
Using output text instead of input text.
Task 3: Fill in the blank (hard)

Fix the error in generating output tokens from the model.

outputs = model.generate([1].input_ids)
A. inputs
B. input_text
C. tokenizer
D. outputs
Common Mistakes
Passing raw text instead of token IDs.
Using the tokenizer object instead of inputs.
Task 4: Fill in the blank (hard)

Fill both blanks to decode the output tokens into text.

decoded_output = tokenizer.[1](outputs[0], skip_special_tokens=[2])
A. decode
B. encode
C. True
D. False
Common Mistakes
Using 'encode' instead of 'decode'.
Setting skip_special_tokens to False, which leaves special tokens such as </s> in the output.
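The encode/decode pair from Tasks 2-4 can be checked with a tokenizer-only round trip, with no model needed. This is a minimal sketch, assuming transformers and sentencepiece are installed and the t5-small vocabulary files can be downloaded:

from transformers import T5Tokenizer

# Load the T5 tokenizer (downloads the SentencePiece vocab on first use).
tokenizer = T5Tokenizer.from_pretrained('t5-small')

# Encode a sentence to token IDs, then decode the IDs back to text.
ids = tokenizer("The house is wonderful.").input_ids
text = tokenizer.decode(ids, skip_special_tokens=True)
print(text)

With skip_special_tokens=False, the decoded string would also include the end-of-sequence token that T5 appends during encoding.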
Task 5: Fill in the blank (hard)

Fill all three blanks to complete the T5 translation pipeline.

from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained('[1]')
model = T5ForConditionalGeneration.from_pretrained('[2]')
input_text = "translate English to French: I love machine learning."
inputs = tokenizer(input_text, return_tensors='pt')
outputs = model.generate([3].input_ids)
translated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
A. t5-small
B. bert-base-uncased
C. inputs
D. gpt2
Common Mistakes
Mixing model names like 'bert-base-uncased' or 'gpt2' with T5 code.
Passing raw text instead of token IDs to model.generate.
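For reference, the full pipeline with all blanks filled can be sketched as follows, assuming transformers and sentencepiece are installed and the t5-small weights can be downloaded:

from transformers import T5ForConditionalGeneration, T5Tokenizer

# Tokenizer and model must come from the same checkpoint;
# mixing in names like 'bert-base-uncased' or 'gpt2' fails here.
tokenizer = T5Tokenizer.from_pretrained('t5-small')
model = T5ForConditionalGeneration.from_pretrained('t5-small')

# T5 is text-to-text: the task is stated as a prefix inside the input string.
input_text = "translate English to French: I love machine learning."
inputs = tokenizer(input_text, return_tensors='pt')

# generate() expects token IDs, not raw text or the tokenizer object.
outputs = model.generate(inputs.input_ids)
translated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(translated_text)

The same code handles any T5 task (summarization, question answering, and so on) by changing only the prefix in input_text.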