
RoBERTa and DistilBERT in NLP - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual · intermediate
Difference in Pretraining Objectives

Which of the following best describes the main difference in the pretraining objectives between RoBERTa and DistilBERT?

A. RoBERTa is trained only on next sentence prediction, while DistilBERT uses masked language modeling.
B. RoBERTa uses dynamic masking with a masked language model objective, while DistilBERT uses a distillation loss to mimic BERT's outputs.
C. RoBERTa uses autoregressive language modeling, while DistilBERT uses masked language modeling.
D. RoBERTa uses a sequence-to-sequence objective, while DistilBERT uses a classification objective.
💡 Hint

Think about how RoBERTa improved BERT's training and how DistilBERT reduces model size.
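To make the "dynamic masking" idea concrete, here is a minimal pure-Python sketch: RoBERTa re-samples which tokens are masked every time a sequence is seen, whereas original BERT fixed the masks once during preprocessing. This is a simplification that masks 15% of tokens uniformly and omits BERT's 80/10/10 replacement scheme.

```python
import random

def dynamic_mask(tokens, mask_prob=0.15, seed=None):
    """Re-sample the masked positions on every call -- the 'dynamic'
    part of RoBERTa's masked language modeling objective."""
    rng = random.Random(seed)
    return [
        "[MASK]" if rng.random() < mask_prob else tok
        for tok in tokens
    ]

tokens = ["the", "cat", "sat", "on", "the", "mat"]
# Different calls can mask different subsets of the same sequence.
epoch1 = dynamic_mask(tokens, seed=1)
epoch2 = dynamic_mask(tokens, seed=2)
```

DistilBERT, by contrast, is trained to match a frozen BERT teacher's output distribution (a distillation loss) in addition to the masked language modeling loss.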

Predict Output
Predict Output · intermediate
Output Shape of RoBERTa Model

Given the following code snippet using Hugging Face Transformers, what is the shape of the last_hidden_state tensor?

from transformers import RobertaModel, RobertaTokenizer
import torch

tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = RobertaModel.from_pretrained('roberta-base')
inputs = tokenizer('Hello world!', return_tensors='pt')
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
print(last_hidden_state.shape)
A. torch.Size([1, 5, 768])
B. torch.Size([1, 3, 768])
C. torch.Size([1, 4, 768])
D. torch.Size([1, 4, 512])
💡 Hint

Count the tokens after tokenization including special tokens.
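The shape arithmetic can be sketched without loading the model, assuming roberta-base's byte-level BPE splits "Hello world!" into the three subwords shown below (the typical behavior of the GPT-2-style tokenizer RoBERTa uses):

```python
# Shape arithmetic for the snippet above, without running the model.
subwords = ["Hello", "Ġworld", "!"]    # Ġ marks a leading space in byte-level BPE
tokens = ["<s>"] + subwords + ["</s>"]  # RoBERTa wraps input in <s> ... </s>
hidden_size = 768                       # roberta-base hidden width

shape = (1, len(tokens), hidden_size)   # (batch, sequence length, hidden size)
print(shape)  # (1, 5, 768)
```

The batch dimension is 1 because a single sentence was passed, and 768 is fixed by the roberta-base architecture; only the middle dimension depends on tokenization.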

Model Choice
Model Choice · advanced
Choosing a Model for Low-Latency Applications

You want to deploy a transformer model for real-time text classification on a mobile device with limited memory and CPU. Which model is the best choice?

A. DistilBERT-base
B. RoBERTa-base
C. BERT-large
D. RoBERTa-large
💡 Hint

Consider model size and speed for mobile deployment.
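A rough footprint comparison makes the mobile constraint concrete. The parameter counts below are approximate published figures (ballpark values, not exact), and the memory estimate assumes fp32 weights at 4 bytes per parameter:

```python
# Approximate parameter counts (millions) for the candidate models.
params_millions = {
    "DistilBERT-base": 66,
    "RoBERTa-base": 125,
    "BERT-large": 340,
    "RoBERTa-large": 355,
}

for name, p in params_millions.items():
    fp32_mb = p * 4  # 4 bytes per fp32 parameter -> megabytes
    print(f"{name}: ~{p}M params, ~{fp32_mb} MB in fp32")
```

DistilBERT is roughly half the size of RoBERTa-base, which translates into lower memory use and faster inference on CPU, the two constraints named in the question.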

Hyperparameter
Hyperparameter · advanced
Effect of Sequence Length on RoBERTa Training

When fine-tuning RoBERTa on a text classification task, increasing the maximum sequence length from 128 to 512 will most likely:

A. Have no effect on training time or accuracy.
B. Decrease training time because longer sequences are processed faster.
C. Reduce memory usage by truncating sequences.
D. Increase training time and memory usage but may improve accuracy on longer texts.
💡 Hint

Think about how sequence length affects computation in transformers.
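The key fact behind this question: self-attention cost scales quadratically with sequence length. A one-line calculation shows the impact of going from 128 to 512 tokens (considering only the attention term, ignoring the feed-forward layers, which scale linearly):

```python
# Self-attention compares every token with every other token,
# so its cost grows with the square of the sequence length.
short_len, long_len = 128, 512
attention_cost_ratio = (long_len / short_len) ** 2
print(attention_cost_ratio)  # 16.0 -> 4x longer input, ~16x attention cost
```

Memory for the attention matrices grows the same way, which is why longer maximum sequence lengths increase both training time and memory usage.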

Metrics
Metrics · expert
Comparing Model Performance Metrics

You fine-tune both RoBERTa-base and DistilBERT-base on the same sentiment analysis dataset. After evaluation, you get these results:

  • RoBERTa-base: Accuracy=0.92, F1-score=0.91, Inference time=120ms
  • DistilBERT-base: Accuracy=0.89, F1-score=0.88, Inference time=70ms

Which statement best summarizes the trade-off between these models?

A. RoBERTa-base is more accurate but slower; DistilBERT is faster but slightly less accurate.
B. DistilBERT is both more accurate and faster than RoBERTa-base.
C. RoBERTa-base is faster and more accurate than DistilBERT.
D. Both models have the same speed and accuracy.
💡 Hint

Look at both accuracy and inference time values.
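The trade-off can be quantified directly from the numbers reported above:

```python
# The evaluation results quoted in the question.
roberta = {"accuracy": 0.92, "latency_ms": 120}
distilbert = {"accuracy": 0.89, "latency_ms": 70}

speedup = roberta["latency_ms"] / distilbert["latency_ms"]
accuracy_drop = roberta["accuracy"] - distilbert["accuracy"]
print(f"DistilBERT is ~{speedup:.2f}x faster "
      f"at a cost of {accuracy_drop:.2f} accuracy points")
```

Roughly a 1.7x latency improvement for a 3-point accuracy drop: the classic accuracy-versus-speed trade-off that distilled models are designed to offer.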