Complete the code to load a pre-trained BERT tokenizer from Hugging Face.
from transformers import [1]
tokenizer = [1].from_pretrained('bert-base-uncased')
The BertTokenizer class loads the pre-trained tokenizer that matches BERT checkpoints. It converts raw text into the token IDs the model understands.
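As a toy illustration of the idea (this is NOT BERT's real WordPiece algorithm, and the vocabulary and IDs below are made up), a tokenizer splits text into known sub-units and maps each to an integer ID:

```python
# Hypothetical mini-vocabulary for illustration only
vocab = {"hello": 1, ",": 2, "how": 3, "are": 4, "you": 5, "?": 6}

def toy_tokenize(text):
    # Lowercase, then split punctuation off words (a crude stand-in
    # for BERT's real subword tokenization)
    words = text.lower().replace(",", " , ").replace("?", " ? ").split()
    return [vocab[w] for w in words]

print(toy_tokenize("Hello, how are you?"))  # → [1, 2, 3, 4, 5, 6]
```

The real BertTokenizer additionally handles subword splitting and special tokens, but the text-to-IDs mapping is the core job.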
Complete the code to tokenize a sentence using the tokenizer.
sentence = "Hello, how are you?"
tokens = tokenizer.[1](sentence, return_tensors='pt')
The encode_plus method tokenizes the input and, with return_tensors='pt', returns PyTorch tensors ready for the model, including the attention mask.
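A sketch of the shape of what encode_plus returns, shown here as plain lists (with return_tensors='pt' these would be PyTorch tensors; the word-piece IDs in the middle are illustrative, though 101 and 102 are the real [CLS] and [SEP] IDs for bert-base-uncased):

```python
# Illustrative structure of an encode_plus result for one sentence
encoded = {
    "input_ids": [[101, 7592, 1010, 2129, 2024, 2017, 1029, 102]],
    "token_type_ids": [[0, 0, 0, 0, 0, 0, 0, 0]],       # segment IDs
    "attention_mask": [[1, 1, 1, 1, 1, 1, 1, 1]],       # 1 = real token, 0 = padding
}
# Every field has one entry per token, so the lengths line up
print(len(encoded["input_ids"][0]) == len(encoded["attention_mask"][0]))  # → True
```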
Fix the error in the code to load a pre-trained BERT model for sequence classification.
from transformers import BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained([1])
The correct model name is 'bert-base-uncased', the standard pre-trained BERT checkpoint that treats uppercase and lowercase text identically.
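"Uncased" means the checkpoint was trained on lowercased text, so its tokenizer lowercases input before tokenizing; a quick sketch of the consequence:

```python
# With an uncased checkpoint, these two inputs tokenize identically,
# because lowercasing happens first
text = "Hello, How Are You?"
normalized = text.lower()
print(normalized)  # → hello, how are you?
```

Cased checkpoints (e.g. 'bert-base-cased') skip this step and keep "Hello" and "hello" distinct.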
Fill both blanks to prepare inputs and get model outputs.
inputs = tokenizer([1], return_tensors=[2])
outputs = model(**inputs)
We pass the sentence string to the tokenizer and specify 'pt' so it returns PyTorch tensors the model accepts.
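The reason model(**inputs) works is plain Python: the tokenizer returns a dict-like object, and ** unpacks its keys as keyword arguments. A sketch with a hypothetical stand-in for the model:

```python
# Hypothetical stand-in with BERT-style keyword arguments
def fake_model(input_ids=None, attention_mask=None, token_type_ids=None):
    received = [("input_ids", input_ids),
                ("attention_mask", attention_mask),
                ("token_type_ids", token_type_ids)]
    # Report which arguments were actually supplied
    return {"received": sorted(k for k, v in received if v is not None)}

inputs = {"input_ids": [[101, 102]], "attention_mask": [[1, 1]]}
# ** turns each dict key into a keyword argument
print(fake_model(**inputs))  # → {'received': ['attention_mask', 'input_ids']}
```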
Fill all three blanks to extract predicted class from model logits.
import torch
logits = outputs.logits
predicted_class = torch.[1](logits, dim=[2]).[3]()
We use argmax over dimension 1 (the class dimension) to get the predicted class index, then call item() to convert the one-element tensor to a plain Python integer.
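What torch.argmax(logits, dim=1).item() computes can be sketched in pure Python for a single example (the logit values below are made up): pick the index of the largest logit in the row, then unwrap the single result to a plain int.

```python
# Illustrative logits for a batch of one example with two classes
logits = [[-1.2, 3.4]]  # shape (batch=1, num_classes=2)
row = logits[0]
# argmax over the class dimension: index of the largest value
predicted_class = max(range(len(row)), key=lambda i: row[i])
print(predicted_class)  # → 1
```

In the real code this runs on tensors, so argmax is vectorized over the whole batch and item() does the final tensor-to-int unwrapping.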