Hard · Conceptual · Q10 of 15
NLP - Sequence Models for NLP
Why does the embedding layer output have 3 dimensions for input shape (batch_size, sequence_length)?
A. Because each token is converted to a vector, adding the embedding dimension
B. Because the embedding layer adds a batch dimension
C. Because the embedding layer converts integers to one-hot vectors
D. Because the embedding layer flattens the input sequences
Step-by-Step Solution
  1. Step 1: Analyze input and output dimensions

    The input has 2 dimensions: batch size and sequence length. The embedding layer replaces each integer token ID with a dense vector, which adds a third dimension of size embedding_dim.
  2. Step 2: Rule out the other options

    The batch dimension is already present in the input (B), an embedding produces dense learned vectors rather than one-hot encodings (C), and the layer preserves the sequence structure instead of flattening it (D).
  3. Final Answer:

    Because each token is converted to a vector, adding the embedding dimension -> Option A
  4. Quick Check:

    Output shape = (batch_size, sequence_length, embedding_dim) [OK]
Quick Trick: An embedding adds one vector dimension per token, so output dims = input dims + 1 [OK]
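The shape change can be sketched with a minimal lookup-table embedding in NumPy (all sizes below are illustrative, not taken from the question):

```python
import numpy as np

# Hypothetical sizes, chosen only for illustration
vocab_size, embedding_dim = 50, 8
batch_size, sequence_length = 4, 10

# An embedding layer is essentially a trainable lookup table:
# one dense vector of length embedding_dim per token ID.
embedding_table = np.random.rand(vocab_size, embedding_dim)

# Input: integer token IDs with shape (batch_size, sequence_length)
token_ids = np.random.randint(0, vocab_size, size=(batch_size, sequence_length))
print(token_ids.shape)   # (4, 10)

# Indexing the table with the 2-D ID array replaces each ID with its
# vector, appending a third dimension of size embedding_dim.
embedded = embedding_table[token_ids]
print(embedded.shape)    # (4, 10, 8)
```

The lookup turns the 2-D integer input into a 3-D float output, which is exactly why the answer is Option A.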
Common Mistakes:
  • Thinking embedding adds batch dimension
  • Confusing embedding with one-hot encoding
  • Assuming embedding flattens input
