NLP · ML · ~10 mins

Why different transformers serve different tasks in NLP - Test Your Understanding

Practice - 5 Tasks
Answer the questions below
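Before starting, it may help to recall which Hugging Face `Auto` class targets which task. The sketch below is a plain-Python summary of that mapping; the class names and checkpoint names (`bert-base-uncased`, `gpt2`, `t5-small`) are the real Transformers identifiers used in the tasks that follow:

```python
# Task -> (Transformers Auto class, typical checkpoint) summary.
# Each Auto class adds a different head on top of the base transformer,
# which is why a checkpoint suited to one task fails at another.
task_to_class = {
    "text classification": ("AutoModelForSequenceClassification", "bert-base-uncased"),
    "text generation": ("AutoModelForCausalLM", "gpt2"),
    "translation / seq2seq": ("AutoModelForSeq2SeqLM", "t5-small"),
}

for task, (cls, checkpoint) in task_to_class.items():
    print(f"{task}: {cls} (e.g. {checkpoint})")
```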
Task 1 (fill in the blank, easy)

Complete the code to load a transformer model for text classification.

NLP
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained([1])
Options:
A. "bert-base-uncased"
B. "gpt2"
C. "t5-small"
D. "roberta-base"
Common Mistakes
Choosing a model designed for text generation instead of classification.
Task 2 (fill in the blank, medium)

Complete the code to tokenize input text for a transformer model.

from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained([1])
inputs = tokenizer("Hello world!", return_tensors="pt")
Options:
A. "gpt2"
B. "bert-base-uncased"
C. "t5-small"
D. "distilbert-base-uncased"
Common Mistakes
Using a tokenizer that does not match the model architecture.
Task 3 (fill in the blank, hard)

Fix the error in the code to generate text using a transformer model.

from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained([1])
tokenizer = AutoTokenizer.from_pretrained("gpt2")
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Options:
A. "gpt2"
B. "roberta-base"
C. "t5-small"
D. "bert-base-uncased"
Common Mistakes
Using a model designed for classification instead of generation.
Task 4 (fill in the blank, hard)

Fill both blanks to create a transformer model and tokenizer for translation.

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained([1])
tokenizer = AutoTokenizer.from_pretrained([2])
inputs = tokenizer("Translate this sentence.", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Options:
A. "t5-small"
B. "bert-base-uncased"
C. "gpt2"
Common Mistakes
Mixing model and tokenizer names from different architectures.
Task 5 (fill in the blank, hard)

Fill all three blanks to create a dictionary comprehension that filters tokens by length and converts them to uppercase.

tokens = ["hello", "to", "world", "a", "transformer"]
filtered = {[1]: [2] for token in tokens if len(token) [3] 3}
Options:
A. token.upper()
B. token
C. >
D. <
Common Mistakes
Using the wrong comparison operator or swapping key and value.
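The pattern Task 5 tests (a dictionary comprehension with a length filter) can be checked against different data without giving the answer away; the word list below is made up for illustration:

```python
words = ["nlp", "attention", "model", "ai"]

# Key = original word, value = uppercase copy; the `if` clause keeps
# only words longer than 3 characters ("nlp" and "ai" are dropped).
result = {w: w.upper() for w in words if len(w) > 3}
print(result)  # {'attention': 'ATTENTION', 'model': 'MODEL'}
```

Swapping key and value, or writing `<` instead of `>`, silently produces a valid but wrong dictionary, which is why the comprehension is worth testing with known input.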