Challenge - 5 Problems
Transformer Mastery Badge
Get all challenges correct to earn this badge!
Test your skills under time pressure!
❓ Predict Output
Intermediate · 2:00 remaining
What is the output shape of the model's last hidden state?
Given the following code using Hugging Face Transformers, what is the shape of the last hidden state tensor?
NLP
from transformers import BertModel, BertTokenizer
import torch

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
inputs = tokenizer('Hello', return_tensors='pt')
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
print(last_hidden_state.shape)
💡 Hint
Remember that BERT outputs batch size, sequence length, and hidden size dimensions.
✗ Incorrect
The last_hidden_state tensor has shape (batch_size, sequence_length, hidden_size). Here the batch size is 1, the sequence length is 3 tokens ([CLS], hello, [SEP]), and BERT-base's hidden size is 768, so the code prints torch.Size([1, 3, 768]).
❓ Model Choice
Intermediate · 1:30 remaining
Which model is best suited for sequence classification tasks?
You want to classify movie reviews as positive or negative using Hugging Face Transformers. Which model class should you use?
💡 Hint
Look for the model class designed specifically for classification tasks.
✗ Incorrect
BertForSequenceClassification adds a classification head on top of BERT for tasks like sentiment analysis.
❓ Hyperparameter
Advanced · 1:30 remaining
Which hyperparameter controls the learning rate in Hugging Face Trainer?
When fine-tuning a transformer model using the Hugging Face Trainer API, which argument sets the learning rate?
💡 Hint
This hyperparameter directly affects how fast the model updates weights.
✗ Incorrect
The 'learning_rate' argument of TrainingArguments (which the Trainer consumes) sets the step size for weight updates during training.
❓ Metrics
Advanced · 1:30 remaining
Which metric is most appropriate for evaluating a multi-class text classification model?
You trained a transformer model to classify news articles into 5 categories. Which metric should you use to evaluate its performance?
💡 Hint
Choose a metric that measures correct label predictions.
✗ Incorrect
Accuracy measures the proportion of correct predictions and is the standard first metric for multi-class classification when the classes are reasonably balanced.
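The metric itself is simple enough to compute by hand. A minimal sketch with made-up labels for the 5-category news task (the label lists here are illustrative, not from the challenge):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# 5-category example: labels are integers 0..4
y_true = [0, 2, 4, 1, 3, 2]
y_pred = [0, 2, 3, 1, 3, 2]
print(accuracy(y_true, y_pred))  # 5 of 6 correct -> 0.8333...
```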
🔧 Debug
Expert · 2:00 remaining
What error does this code raise when loading a tokenizer?
Consider this code snippet:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('nonexistent-model')
💡 Hint
Check what happens when you try to load a model that does not exist on Hugging Face hub.
✗ Incorrect
Trying to load a model identifier that does not exist on the Hugging Face Hub raises an OSError reporting that the model could not be found.
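A sketch of handling this defensively, assuming `transformers` is installed ('nonexistent-model' is just a placeholder repo id that is not on the Hub):

```python
from transformers import AutoTokenizer

# Loading a repo id that does not exist on the Hub raises OSError
try:
    AutoTokenizer.from_pretrained('nonexistent-model')
except OSError as err:
    print(f'Caught OSError: {err}')
```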