PyTorch · ~10 mins

BERT for text classification in PyTorch - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1: Fill in the blank (easy)

Complete the code to load the BERT tokenizer.

PyTorch
from transformers import [1]
tokenizer = [1].from_pretrained('bert-base-uncased')
A. BertTokenizer
B. BertModel
C. AutoTokenizer
D. AutoModel
Common Mistakes
Using BertModel instead of a tokenizer.
Using BertTokenizer, which works for this checkpoint but is less flexible than AutoTokenizer.
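For reference, a sketch of the completed answer (option C). It requires the transformers library and downloads the tokenizer files on first use:

```python
from transformers import AutoTokenizer

# Option C: AutoTokenizer resolves the right tokenizer class from the
# checkpoint name. BertTokenizer (option A) also works, but only for BERT.
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
```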
Task 2: Fill in the blank (medium)

Complete the code to tokenize input text for BERT.

PyTorch
inputs = tokenizer('[1]', return_tensors='pt', padding=True, truncation=True)
A. BERT is great for NLP.
B. Hello, how are you?
C. I love machine learning.
D. This is a test sentence.
Common Mistakes
Not passing a string to the tokenizer.
Passing a list of strings when this exercise expects a single sentence.
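A sketch of the completed call; any one of the option sentences fits the blank, since the tokenizer just needs a single string here. The result contains input_ids and attention_mask tensors:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')

# Any single sentence works in the blank; option A is used here.
inputs = tokenizer('BERT is great for NLP.', return_tensors='pt',
                   padding=True, truncation=True)
print(sorted(inputs.keys()))  # includes 'attention_mask' and 'input_ids'
```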
Task 3: Fill in the blank (hard)

Complete the code to get BERT's pooled output for classification.

PyTorch
outputs = model(**inputs)
pooled_output = outputs.[1]
A. pooler_output
B. last_hidden_state
C. hidden_states
D. logits
Common Mistakes
Using last_hidden_state which is the sequence output, not pooled.
Trying to access logits directly from model output without a classification head.
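A sketch of the completed answer (option A, pooler_output). To stay runnable without a checkpoint download, it builds a tiny randomly initialised BERT from a config; with real weights you would load BertModel.from_pretrained('bert-base-uncased') instead, and the sizes below are arbitrary illustration values:

```python
import torch
from transformers import BertConfig, BertModel

# Tiny random BERT so the example runs offline; sizes are illustrative only.
config = BertConfig(vocab_size=100, hidden_size=32, num_hidden_layers=1,
                    num_attention_heads=2, intermediate_size=64)
model = BertModel(config)

inputs = {'input_ids': torch.randint(0, 100, (1, 8)),
          'attention_mask': torch.ones(1, 8, dtype=torch.long)}
outputs = model(**inputs)

pooled_output = outputs.pooler_output        # option A: (batch, hidden_size)
sequence_output = outputs.last_hidden_state  # option B: (batch, seq_len, hidden_size)
print(pooled_output.shape)                   # torch.Size([1, 32])
```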
Task 4: Fill in the blank (hard)

Fill both blanks to define a simple classifier head on top of BERT's pooled output.

PyTorch
import torch.nn as nn

class BertClassifier(nn.Module):
    def __init__(self, bert_model):
        super().__init__()
        self.bert = bert_model
        self.classifier = nn.[1](bert_model.config.hidden_size, [2])

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        pooled_output = outputs.pooler_output
        return self.classifier(pooled_output)
A. Linear
B. ReLU
C. 2
D. Dropout
Common Mistakes
Using activation functions like ReLU or Dropout as the classifier layer.
Setting the output size to something other than the number of classes (2 for binary classification here).
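The completed class (options A and C: nn.Linear with 2 output classes), wired to a tiny randomly initialised BERT so the sketch runs without a checkpoint download. The num_labels parameter and the config sizes are illustrative assumptions, not bert-base-uncased's real dimensions:

```python
import torch
import torch.nn as nn
from transformers import BertConfig, BertModel

class BertClassifier(nn.Module):
    def __init__(self, bert_model, num_labels=2):
        super().__init__()
        self.bert = bert_model
        # Option A (nn.Linear) with option C (2 classes) as the output size.
        self.classifier = nn.Linear(bert_model.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        return self.classifier(outputs.pooler_output)

# Tiny random BERT so the example runs offline; sizes are illustrative only.
config = BertConfig(vocab_size=100, hidden_size=32, num_hidden_layers=1,
                    num_attention_heads=2, intermediate_size=64)
clf = BertClassifier(BertModel(config))
logits = clf(torch.randint(0, 100, (4, 8)), torch.ones(4, 8, dtype=torch.long))
print(logits.shape)  # torch.Size([4, 2])
```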
Task 5: Fill in the blank (hard)

Fill all three blanks to compute the accuracy metric after predictions.

PyTorch
import torch

preds = torch.argmax(logits, dim=[1])
correct = (preds == labels).sum().item()
accuracy = correct / [2]
print(f'Accuracy: {accuracy:.2f}')

# Assuming labels is a tensor of size [3]
A. 1
B. 0
C. len(labels)
D. labels.size(0)
Common Mistakes
Using wrong dimension in argmax.
Dividing by len(labels), which does work on a 1-D tensor but is less explicit than labels.size(0).
Using incorrect batch size calculation.
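The completed computation with toy tensors so the numbers can be checked by hand: argmax over dim=1 (the class dimension) and labels.size(0) for the batch size. The logits and labels below are made-up illustration values:

```python
import torch

# Toy logits for 4 examples and 2 classes; labels chosen for illustration.
logits = torch.tensor([[2.0, 0.1], [0.3, 1.5], [1.2, 0.4], [0.2, 2.2]])
labels = torch.tensor([0, 1, 1, 1])

preds = torch.argmax(logits, dim=1)      # dim=1 is the class dimension
correct = (preds == labels).sum().item()
accuracy = correct / labels.size(0)      # batch size from the tensor itself
print(f'Accuracy: {accuracy:.2f}')       # Accuracy: 0.75 (3 of 4 correct)
```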