Hard 📝 Conceptual · Q10 of 15
NLP – Word Embeddings
Why does FastText produce embeddings for words not seen during training, unlike traditional word2vec embeddings?
A. Because it ignores word order completely
B. Because it stores all possible words in a dictionary
C. Because it uses character n-grams to build word vectors from subword units
D. Because it uses one-hot encoding for unknown words
Step-by-Step Solution
Solution:
  1. Step 1: Recall FastText's subword modeling

    FastText breaks each word into character n-grams (typically of length 3 to 6) and represents the word as the sum of its n-gram vectors. An unseen word can therefore still be embedded by combining the vectors of its subword units.
  2. Step 2: Contrast with traditional word2vec

    Word2vec treats each word as an atomic unit with its own vector, so it has no vector for any word absent from the training vocabulary (the out-of-vocabulary, or OOV, problem).
  3. Final Answer:

    Because it uses character n-grams to build word vectors from subword units -> Option C
  4. Quick Check:

    Subword n-grams enable OOV embeddings → Option C [OK]
Quick Trick: Subword n-grams let FastText embed new words [OK]
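The mechanism behind the answer can be sketched in a few lines. This is a toy illustration, not the real FastText implementation: the n-gram extraction mirrors FastText's boundary-marked character n-grams, but the vector table here is filled with random vectors standing in for trained subword embeddings.

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    """Extract character n-grams from a word wrapped in boundary
    markers, as FastText does (e.g. "where" -> "<wh", "whe", ...)."""
    w = f"<{word}>"
    grams = []
    for n in range(n_min, n_max + 1):
        for i in range(len(w) - n + 1):
            grams.append(w[i:i + n])
    return grams

# Toy subword vector table: random vectors stand in for trained ones.
rng = np.random.default_rng(0)
dim = 8
table = {}

def gram_vector(gram):
    # Assign a stable random vector per n-gram on first lookup.
    if gram not in table:
        table[gram] = rng.standard_normal(dim)
    return table[gram]

def word_vector(word):
    """Even an unseen word gets a vector: the average of the
    vectors of its character n-grams."""
    grams = char_ngrams(word)
    return sum(gram_vector(g) for g in grams) / len(grams)

# Works for a word the "model" has never stored explicitly.
vec = word_vector("unseenword")
print(vec.shape)  # (8,)
```

A plain word2vec lookup table would raise a KeyError here; composing the vector from shared subword units is exactly what makes option C correct.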
Common Mistakes:
  • Thinking FastText stores all words explicitly
  • Believing it ignores word order
  • Confusing one-hot encoding with embeddings
