
BERT tokenization (WordPiece) in NLP - Model Metrics & Evaluation

Which metric matters for BERT tokenization (WordPiece) and WHY

BERT's WordPiece tokenizer breaks words into smaller pieces called subword tokens. The key metric to check is tokenization coverage: the fraction of tokens that are known vocabulary pieces rather than the unknown token. Good coverage means fewer unknown tokens, so less information is lost before the model ever sees the text.

Another important metric is tokenization consistency, ensuring the same word is split the same way every time. This helps the model learn stable word meanings.
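Coverage is easy to compute directly from tokenizer output. The sketch below is a minimal illustration in plain Python; the corpus, the tiny token lists, and the `[UNK]` marker are assumptions for the example, not output from a real BERT vocabulary.

```python
# Sketch of a coverage check: count how many produced tokens are the
# unknown marker. The corpus below is hypothetical tokenizer output.

def coverage(token_lists, unk_token="[UNK]"):
    """Fraction of tokens that are known vocabulary pieces (not unknown)."""
    tokens = [t for seq in token_lists for t in seq]
    if not tokens:
        return 1.0
    known = sum(1 for t in tokens if t != unk_token)
    return known / len(tokens)

# Two example sentences, already tokenized (hypothetical output):
corpus = [
    ["un", "##happi", "##ness"],
    ["play", "##ing", "[UNK]"],
]
print(f"coverage: {coverage(corpus):.2%}")  # 5 of 6 tokens known
```

The same loop scales to a full evaluation corpus; in practice you would feed it the output of your actual tokenizer rather than hand-written lists.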

Confusion matrix or equivalent visualization

Instead of a confusion matrix, we use a tokenization example comparison to see how words are split:

Original text: "unhappiness"
WordPiece tokens: ["un", "##happi", "##ness"]

(The "##" prefix marks a piece that continues the previous token, so the pieces concatenate back to the original word.)

Unknown tokens: 0

Coverage: 100% known tokens

If a word cannot be built from common pieces, the tokenizer falls back to finer-grained splits or the unknown token: a misspelling such as "unhappyness" might tokenize as ["un", "##happy", "##ness"], while a string that no combination of vocabulary pieces covers becomes [UNK], lowering coverage.
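The splitting behavior above comes from WordPiece's greedy longest-match-first algorithm. Here is a minimal sketch of it; the five-entry vocabulary is invented for illustration (real BERT vocabularies hold roughly 30k entries).

```python
# Minimal greedy longest-match-first WordPiece sketch.
# The tiny vocabulary is an illustrative assumption.

def wordpiece(word, vocab, unk="[UNK]"):
    tokens, start = [], 0
    while start < len(word):
        piece, end = None, len(word)
        # Try the longest remaining substring first, shrinking on misses.
        while end > start:
            sub = word[start:end]
            if start > 0:
                sub = "##" + sub  # continuation pieces carry the ## prefix
            if sub in vocab:
                piece = sub
                break
            end -= 1
        if piece is None:
            return [unk]  # no vocabulary piece covers this span
        tokens.append(piece)
        start = end
    return tokens

vocab = {"un", "##happi", "##ness", "play", "##ing"}
print(wordpiece("unhappiness", vocab))  # ['un', '##happi', '##ness']
print(wordpiece("xyz", vocab))          # ['[UNK]']
```

Because the algorithm always takes the longest matching piece, a fixed vocabulary yields a deterministic split for every word.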
    
Precision vs Recall tradeoff (or equivalent) with concrete examples

For tokenization, the tradeoff is between vocabulary size and token granularity:

  • A large vocabulary means fewer splits, so tokens are more precise (like whole words). But it needs more memory and can miss rare words.
  • A small vocabulary means more splits into subwords, increasing recall of rare words but making tokens less precise and longer sequences.

Example: "playing" can be one token or split into "play" + "##ing". Smaller vocab helps handle new words like "playings" by splitting.
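The tradeoff can be seen by running a greedy longest-match split against two hypothetical vocabularies, one that contains "playing" as a whole word and one that only has the pieces. Both vocabularies are invented for this sketch.

```python
# Sketch of the vocabulary-size tradeoff: the same greedy longest-match
# split under a larger vs. smaller (invented) vocabulary.

def split(word, vocab):
    out, start = [], 0
    while start < len(word):
        for end in range(len(word), start, -1):
            sub = ("##" if start else "") + word[start:end]
            if sub in vocab:
                out.append(sub)
                start = end
                break
        else:
            return ["[UNK]"]
    return out

large = {"playing", "play", "##ing", "##s"}
small = {"play", "##ing", "##s"}

print(split("playing", large))   # ['playing']               -> length 1
print(split("playing", small))   # ['play', '##ing']         -> length 2
print(split("playings", small))  # ['play', '##ing', '##s']  -> novel word still covered
```

The larger vocabulary yields shorter sequences, but the smaller one gracefully handles the unseen word "playings" instead of emitting [UNK].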

What "good" vs "bad" metric values look like for BERT tokenization

Good tokenization:

  • High coverage: Most words split into known tokens (e.g., > 95% coverage)
  • Consistent splits: Same words always tokenized the same way
  • Balanced vocabulary size: Not too big or too small

Bad tokenization:

  • Many unknown tokens, hurting model understanding
  • Inconsistent token splits causing confusion
  • A vocabulary that is too large, slowing training and wasting memory, or too small, producing long token sequences

Metrics pitfalls
  • Ignoring unknown tokens: Overlooking unknown tokens can hide poor coverage.
  • Overfitting vocabulary: Making vocabulary too specific to training data hurts generalization.
  • Long token sequences: Too many splits increase sequence length, slowing training and inference.
  • Inconsistent tokenization: Different splits for same words confuse the model.
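The consistency pitfall can be caught with a simple audit: record every split each distinct word receives and flag words that ever get two different ones. A deterministic tokenizer with a fixed vocabulary is always consistent, so in practice a hit here usually means vocabularies or normalization settings differ across runs. The observations below are hypothetical tokenizer outputs.

```python
# Sketch of a consistency audit over (word, split) observations.
# The observation list is invented for illustration.
from collections import defaultdict

observed = [
    ("playing", ("play", "##ing")),
    ("playing", ("play", "##ing")),
    ("unhappiness", ("un", "##happi", "##ness")),
    ("unhappiness", ("un", "##happy", "##ness")),  # inconsistent split
]

splits = defaultdict(set)
for word, toks in observed:
    splits[word].add(toks)

inconsistent = [w for w, s in splits.items() if len(s) > 1]
print(inconsistent)  # ['unhappiness']
```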

Self-check question

Your tokenizer has 98% coverage but splits common words inconsistently. Is it good?

Answer: No. High coverage is good, but inconsistent splits confuse the model. Both coverage and consistency matter for good tokenization.

Key Result
High token coverage and consistent token splits are key to effective BERT WordPiece tokenization.