NLP · ~10 mins

BERT tokenization (WordPiece) in NLP - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1: Fill in the blank (easy)

Complete the code to import the BERT tokenizer from the transformers library.

from transformers import [1]
A. Tokenizer
B. AutoTokenizer
C. BertTokenizer
D. BertModel
Common Mistakes
Importing BertModel instead of BertTokenizer.
Using a generic Tokenizer class that does not exist.
Confusing AutoTokenizer with BertTokenizer (AutoTokenizer works but is not the class this exercise loads in the next task).
Task 2: Fill in the blank (medium)

Complete the code to load the pretrained BERT tokenizer for 'bert-base-uncased'.

tokenizer = BertTokenizer.[1]('bert-base-uncased')
A. load
B. from_pretrained
C. init
D. tokenize
Common Mistakes
Using 'load' which is not a method of BertTokenizer.
Using 'init', which refers to the constructor and does not load pretrained weights.
Using 'tokenize' which is for tokenizing text, not loading.
Task 3: Fill in the blank (hard)

Complete the code to tokenize the sentence into token strings using the tokenizer.

tokens = tokenizer.[1]('Hello, how are you?')
A. tokenize
B. split
C. parse
D. encode
Common Mistakes
Using 'encode' which returns token IDs, not token strings.
Using 'split', which is a Python string method, not a tokenizer method.
Using 'parse' which is not a tokenizer method.
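Why does `tokenize` return strings like `['play', '##ing']` rather than whole words? WordPiece splits each word by greedy longest-prefix matching against a vocabulary. The sketch below illustrates that matching loop with a tiny hypothetical vocabulary (real BERT vocabularies have roughly 30,000 entries); it is a simplified illustration, not the `transformers` implementation.

```python
# Toy illustration of WordPiece's greedy longest-match-first algorithm.
# VOCAB is a made-up miniature vocabulary for demonstration only.
VOCAB = {"play", "##ing", "un", "##happy", "hello", "[UNK]"}

def wordpiece_tokenize(word, vocab=VOCAB):
    """Split one word into subword strings by greedy longest-prefix matching."""
    tokens = []
    start = 0
    while start < len(word):
        end = len(word)
        match = None
        while start < end:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece  # non-initial pieces carry the '##' prefix
            if piece in vocab:
                match = piece
                break
            end -= 1  # shrink the candidate until it is in the vocabulary
        if match is None:
            return ["[UNK]"]  # no piece matches: the whole word maps to [UNK]
        tokens.append(match)
        start = end
    return tokens

print(wordpiece_tokenize("playing"))  # ['play', '##ing']
print(wordpiece_tokenize("unhappy"))  # ['un', '##happy']
```

This is why `tokenizer.tokenize(...)` in Task 3 yields human-readable subword strings, while `encode` goes one step further and maps each string to its integer vocabulary ID.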
Task 4: Fill in the blank (hard)

Fill both blanks to build a dictionary of token IDs and an attention mask for the input text, then extract the token IDs.

encoded_input = tokenizer('[1]', return_tensors='pt', padding=True, truncation=True)
input_ids = encoded_input['[2]']
A. Hello, how are you?
B. input_ids
C. attention_mask
D. tokens
Common Mistakes
Using 'tokens' instead of 'input_ids' as dictionary key.
Putting a bare variable name instead of a quoted string in the first blank.
Using 'attention_mask' key when token IDs are needed.
Task 5: Fill in the blank (hard)

Fill all three blanks to decode token IDs back to the original text, dropping special tokens such as [CLS] and [SEP].

decoded_text = tokenizer.[1](encoded_input['[2]'][0], skip_special_tokens=[3])
A. decode
B. input_ids
C. True
D. False
Common Mistakes
Using 'encode' instead of 'decode'.
Using 'attention_mask' key instead of 'input_ids'.
Setting skip_special_tokens to False, which keeps special tokens.
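Putting the five answers together: import `BertTokenizer`, load it with `BertTokenizer.from_pretrained('bert-base-uncased')`, tokenize, encode, and decode. Since running the real library downloads a pretrained model, the sketch below instead uses a toy class that mimics only the interface shape exercised above (`tokenize`, calling the tokenizer, `decode` with `skip_special_tokens`). `ToyTokenizer` and its whitespace "tokenization" are made up for illustration; the real WordPiece behavior and vocabulary come from `transformers`.

```python
# A minimal toy mimicking the tokenizer interface used in Tasks 1-5.
# The real equivalent is:
#   from transformers import BertTokenizer
#   tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
class ToyTokenizer:
    CLS, SEP = "[CLS]", "[SEP]"

    def __init__(self):
        self.specials = {self.CLS, self.SEP}
        self.vocab = {}  # token string -> integer ID, built lazily

    def _id(self, token):
        return self.vocab.setdefault(token, len(self.vocab))

    def tokenize(self, text):
        # Task 3: tokenize() returns token *strings*, not IDs.
        return text.lower().replace(",", " ,").replace("?", " ?").split()

    def __call__(self, text):
        # Task 4: calling the tokenizer returns a dict with
        # 'input_ids' and 'attention_mask'.
        tokens = [self.CLS] + self.tokenize(text) + [self.SEP]
        ids = [self._id(t) for t in tokens]
        return {"input_ids": [ids], "attention_mask": [[1] * len(ids)]}

    def decode(self, ids, skip_special_tokens=True):
        # Task 5: decode() maps IDs back to text;
        # skip_special_tokens=True drops [CLS] and [SEP].
        rev = {i: t for t, i in self.vocab.items()}
        tokens = [rev[i] for i in ids]
        if skip_special_tokens:
            tokens = [t for t in tokens if t not in self.specials]
        return " ".join(tokens)

tokenizer = ToyTokenizer()
encoded_input = tokenizer("Hello, how are you?")
input_ids = encoded_input["input_ids"]
decoded_text = tokenizer.decode(input_ids[0], skip_special_tokens=True)
print(decoded_text)  # hello , how are you ?
```

The call-then-decode round trip mirrors the answer pattern of Tasks 4 and 5: the dictionary key for token IDs is `input_ids` (not `tokens`), and `skip_special_tokens=True` is what removes [CLS] and [SEP] from the decoded string.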