Challenge - 5 Problems
T5 Text-to-Text Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
🧠 Conceptual
intermediate · 2:00 remaining
What is the main advantage of T5's text-to-text framework?
T5 treats every problem as a text-to-text task. What is the main advantage of this approach?
Attempts:
2 left
💡 Hint
Think about how treating all tasks as text input and output simplifies the model design.
✗ Incorrect
T5's text-to-text framework means the same model can be used for translation, summarization, classification, and more by just changing the input and output text format.
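The framework can be sketched without loading any model weights: the only thing that changes from task to task is the textual prefix on the input. The prefixes below follow T5's published conventions; the helper function itself is hypothetical, for illustration only.

```python
def make_t5_input(task_prefix: str, text: str) -> str:
    """Build a T5 text-to-text input: every task is just 'prefix: text'."""
    return f"{task_prefix}: {text}"

# The same seq2seq model handles all of these; only the prefix differs.
examples = [
    make_t5_input("translate English to German", "The house is wonderful."),
    make_t5_input("summarize", "The cat sat on the mat and looked outside."),
    make_t5_input("cola sentence", "The course is jumping well."),  # acceptability
]
for e in examples:
    print(e)
```

Because classification labels are also emitted as text (e.g. `"acceptable"`), no task-specific output heads are needed.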
❓ Predict Output
intermediate · 2:00 remaining
Output of T5 model generating summary
Given the input text: "summarize: The cat sat on the mat and looked outside." What is the most likely output of a T5 model fine-tuned for summarization?
NLP
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained('t5-small')
model = T5ForConditionalGeneration.from_pretrained('t5-small')

input_text = 'summarize: The cat sat on the mat and looked outside.'
input_ids = tokenizer(input_text, return_tensors='pt').input_ids
outputs = model.generate(input_ids, max_length=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Attempts:
2 left
💡 Hint
Summarization usually shortens and keeps the main idea.
✗ Incorrect
A T5 model fine-tuned for summarization produces a shorter sentence that keeps the main idea; here, "The cat looked outside." is the most likely output.
❓ Hyperparameter
advanced · 2:00 remaining
Choosing max_length for T5 text generation
When generating text with T5, which max_length value is best to avoid cutting off important output while keeping generation efficient?
Attempts:
2 left
💡 Hint
Think about balancing output completeness and speed.
✗ Incorrect
Choosing max_length based on expected output size avoids cutting off important text and keeps generation efficient.
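The trade-off can be seen in a toy greedy-decoding loop that stops either at an end-of-sequence token or at the length cap. The "model" below is a hard-coded lookup, not a real T5; it only illustrates how a too-small `max_length` truncates the output.

```python
def toy_generate(next_token, max_length):
    """Greedy decoding loop: stop at '</s>' or when max_length is hit."""
    out = ["<pad>"]  # T5 decoding starts from a pad token
    while len(out) < max_length:
        tok = next_token(out)
        if tok == "</s>":
            break
        out.append(tok)
    return out[1:]

# Toy "model": deterministically continues a fixed summary.
summary = ["the", "cat", "looked", "outside", "</s>"]
def next_token(prefix):
    return summary[len(prefix) - 1]

print(toy_generate(next_token, max_length=3))   # truncated mid-sentence
print(toy_generate(next_token, max_length=10))  # complete summary
```

Setting `max_length` slightly above the expected output size keeps outputs complete without paying for tokens that are never generated.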
❓ Metrics
advanced · 2:00 remaining
Evaluating T5 model performance on summarization
Which metric is most appropriate to evaluate the quality of summaries generated by a T5 model?
Attempts:
2 left
💡 Hint
Think about metrics that compare generated text to reference text.
✗ Incorrect
ROUGE is the standard metric for summarization: it measures n-gram overlap between the generated summary and a reference summary. BLEU, which is precision-oriented, is more common for machine translation.
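A simplified ROUGE-1 can be computed by hand: count overlapping unigrams between the candidate and reference summaries. This sketch ignores stemming and other refinements found in full ROUGE implementations.

```python
from collections import Counter

def rouge1(candidate: str, reference: str):
    """Simplified ROUGE-1: unigram-overlap precision, recall, and F1."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # multiset intersection
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-9)
    return precision, recall, f1

p, r, f = rouge1("the cat looked outside",
                 "the cat sat and looked outside")
print(f"P={p:.2f} R={r:.2f} F1={f:.2f}")
```

ROUGE's emphasis on recall rewards summaries that cover the reference's content, which is why it fits summarization better than BLEU.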
🔧 Debug
expert · 3:00 remaining
Why does T5 generate repetitive text?
You fine-tuned a T5 model for text generation, but the output repeats the same phrase multiple times. What is the most likely cause?
Attempts:
2 left
💡 Hint
Consider how decoding methods affect output variety.
✗ Incorrect
Greedy decoding or beam search without diversity penalties can cause repetitive outputs in text generation.
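A common mitigation, mirroring the `no_repeat_ngram_size` option in Hugging Face's `generate`, can be sketched as a bigram-blocking check: before emitting a token, verify it would not complete a bigram already present in the output. The ranked token list below stands in for a toy model that always prefers the same word.

```python
def blocks_repeat_bigram(output, candidate):
    """True if appending `candidate` would repeat an existing bigram."""
    if not output:
        return False
    new_bigram = (output[-1], candidate)
    existing = set(zip(output, output[1:]))
    return new_bigram in existing

# A greedy decoder that always picks "very" would loop forever;
# bigram blocking forces it to fall back to the next-best token.
out = ["very"]
ranked = ["very", "good"]  # toy model's preference order at every step
for _ in range(3):
    for tok in ranked:
        if not blocks_repeat_bigram(out, tok):
            out.append(tok)
            break
print(out)
```

Sampling strategies (temperature, top-k, nucleus) attack the same failure mode by not always taking the argmax token.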