Complete the code to load a transformer model for text classification.
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained([1])
The bert-base-uncased model is commonly used for sequence classification tasks.
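One way to fill the blank, consistent with the explanation above, is to pass the checkpoint name as a string (a minimal sketch, assuming the transformers library is installed and the bert-base-uncased checkpoint can be downloaded):

```python
from transformers import AutoModelForSequenceClassification

# Load a pretrained BERT encoder with a randomly initialized classification head.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
```

By default the head has two labels; pass num_labels to from_pretrained to change that.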
Complete the code to tokenize input text for a transformer model.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained([1])
inputs = tokenizer("Hello world!", return_tensors="pt")
The tokenizer must match the model. Here, bert-base-uncased tokenizer is used.
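Filling the blank with the checkpoint named in the explanation gives a runnable version (a sketch, assuming transformers is installed):

```python
from transformers import AutoTokenizer

# The tokenizer checkpoint must match the model checkpoint.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# return_tensors="pt" yields PyTorch tensors ready to feed into the model.
inputs = tokenizer("Hello world!", return_tensors="pt")
```

The result is a dict-like object holding input_ids, token_type_ids, and attention_mask tensors.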
Complete the code to generate text using a transformer model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained([1])
tokenizer = AutoTokenizer.from_pretrained("gpt2")
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
The gpt2 model is a causal language model suitable for text generation.
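Since the tokenizer is loaded from "gpt2", the model blank must name the same checkpoint. A completed sketch (assuming transformers is installed; max_new_tokens and pad_token_id are additions here to bound generation length and silence GPT-2's missing-pad-token warning):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Model and tokenizer must come from the same checkpoint.
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
# Greedy decoding by default; cap the continuation at 20 new tokens.
outputs = model.generate(
    **inputs, max_new_tokens=20, pad_token_id=tokenizer.eos_token_id
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```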
Fill both blanks to create a transformer model and tokenizer for translation.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained([1])
tokenizer = AutoTokenizer.from_pretrained([2])
inputs = tokenizer("Translate this sentence.", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
The t5-small model and tokenizer are used for sequence-to-sequence tasks like translation.
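With both blanks filled as "t5-small", a runnable sketch looks like the following. Note one assumption beyond the original: T5 checkpoints were trained with task prefixes such as "translate English to German: ", so the input text here adds such a prefix to actually trigger translation.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Both blanks name the same seq2seq checkpoint.
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
tokenizer = AutoTokenizer.from_pretrained("t5-small")

# T5 expects a task prefix describing what to do with the input.
inputs = tokenizer(
    "translate English to German: Hello, how are you?", return_tensors="pt"
)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```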
Fill all three blanks to create a dictionary comprehension that filters tokens by length and converts them to uppercase.
tokens = ["hello", "to", "world", "a", "transformer"]

filtered = {[1]: [2] for token in tokens if len(token) [3] 3}
This comprehension creates a dictionary with keys as uppercase tokens and values as original tokens, only for tokens longer than 3 characters.
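Filling the three blanks as the explanation describes gives (pure Python, no dependencies):

```python
tokens = ["hello", "to", "world", "a", "transformer"]

# Keys are uppercased tokens, values are the originals;
# keep only tokens longer than 3 characters.
filtered = {token.upper(): token for token in tokens if len(token) > 3}
print(filtered)
# {'HELLO': 'hello', 'WORLD': 'world', 'TRANSFORMER': 'transformer'}
```

So the blanks are [1] = token.upper(), [2] = token, and [3] = >.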