
BERT tokenization (WordPiece) in NLP - Cheat Sheet & Quick Revision

Recall & Review
beginner
What is the main purpose of BERT tokenization using WordPiece?
To split words into smaller subword units so that rare or unknown words can be represented as combinations of known pieces, improving the model's understanding of language.
beginner
How does WordPiece handle unknown words during tokenization?
It scans the unknown word from left to right, greedily taking the longest subword that exists in the vocabulary at each step, until the whole word is covered; if no known piece fits at some position, the entire word becomes [UNK]. This lets the model understand new words from familiar parts.
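The greedy longest-match-first procedure described above can be sketched in a few lines of Python. The tiny vocabulary here is purely illustrative, not BERT's real ~30,000-entry vocabulary:

```python
def wordpiece_tokenize(word, vocab, unk="[UNK]"):
    # Greedy longest-match-first: at each position, take the longest
    # vocabulary entry that matches the remaining characters.
    # Pieces after the first carry the '##' continuation marker.
    tokens, start = [], 0
    while start < len(word):
        end = len(word)
        piece = None
        while start < end:
            sub = word[start:end]
            if start > 0:
                sub = "##" + sub
            if sub in vocab:
                piece = sub
                break
            end -= 1
        if piece is None:
            return [unk]  # no known piece covers this position -> whole word is unknown
        tokens.append(piece)
        start = end
    return tokens

# Toy vocabulary for illustration only
vocab = {"un", "##happi", "##ness", "play", "##ing"}
print(wordpiece_tokenize("unhappiness", vocab))  # ['un', '##happi', '##ness']
print(wordpiece_tokenize("playing", vocab))      # ['play', '##ing']
```

Note that, as in BERT's tokenizer, a word that cannot be fully covered by known pieces maps to a single [UNK] token rather than a partial split.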
beginner
Why does WordPiece add '##' before some tokens?
The '##' symbol marks that the token is a continuation of a previous token and not a standalone word, helping the model know how subwords connect to form full words.
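The '##' convention also makes it easy to reassemble the original words from a stream of pieces. A minimal sketch (the function name is my own, not part of any library):

```python
def join_wordpieces(tokens):
    # '##' marks a continuation, so strip it and glue the piece onto the
    # current word; any other token starts a new word.
    words, current = [], ""
    for tok in tokens:
        if tok.startswith("##"):
            current += tok[2:]
        else:
            if current:
                words.append(current)
            current = tok
    if current:
        words.append(current)
    return words

print(join_wordpieces(["play", "##ing", "un", "##happi", "##ness"]))
# ['playing', 'unhappiness']
```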
intermediate
Explain the difference between a word and a WordPiece token in BERT tokenization.
A word is a complete unit of language, while a WordPiece token can be a full word or a smaller part of a word. WordPiece tokens allow BERT to handle rare or new words by breaking them into known pieces.
intermediate
What is the advantage of using WordPiece tokenization over simple word-level tokenization?
WordPiece reduces the vocabulary size and handles rare or new words better by splitting them into subwords, which helps the model learn more efficiently and generalize to unseen words.
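To make the vocabulary-size point concrete, here is a toy comparison; the word lists are invented for illustration:

```python
# Word-level tokenization needs a separate entry for every inflected form:
word_vocab = {
    "play", "plays", "played", "playing", "player", "players",
    "work", "works", "worked", "working", "worker", "workers",
}

# WordPiece covers all twelve forms with two stems and five suffix pieces,
# e.g. "players" -> ["play", "##ers"], "working" -> ["work", "##ing"]:
piece_vocab = {"play", "work", "##s", "##ed", "##ing", "##er", "##ers"}

print(len(word_vocab), len(piece_vocab))  # 12 7
```

The gap grows rapidly with real vocabularies, and the shared suffix pieces also let the model transfer what it learns about "##ing" across every verb.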
What does the '##' symbol indicate in WordPiece tokens?
A. The token is a suffix or continuation of a previous token
B. The token is a prefix of a word
C. The token is an unknown word
D. The token is a standalone word
Answer: A
Why does BERT use WordPiece tokenization instead of splitting only by spaces?
A. To increase vocabulary size
B. To handle rare and unknown words by breaking them into smaller parts
C. To remove punctuation
D. To translate words into another language
Answer: B
If the word 'unhappiness' is unknown, how might WordPiece tokenize it?
A. ['unhappiness']
B. ['un', '##happiness']
C. ['un', '##happi', '##ness']
D. ['unh', '##app', '##iness']
Answer: C
What is a key benefit of having a smaller vocabulary with WordPiece?
A. Less accurate predictions
B. More complex model architecture
C. More memory usage
D. Faster training and better handling of rare words
Answer: D
Which of these is NOT true about WordPiece tokenization?
A. It always treats each word as a single token
B. It uses '##' to mark subword continuations
C. It splits words into subwords
D. It helps handle unknown words
Answer: A
Describe how BERT's WordPiece tokenization works and why it is useful.
Think about how breaking words into smaller parts helps the model.
Explain the role of the '##' symbol in WordPiece tokens and give an example.
Consider how subwords connect to form full words.