Hard · Application · Q9 of 15
NLP - Sequence Models for NLP
You want to embed sentences of varying lengths using an embedding layer followed by an LSTM. Which preprocessing step is necessary before feeding input to the embedding layer?
A. Convert sentences to one-hot vectors
B. Pad or truncate sentences to a fixed length
C. Normalize word embeddings to unit length
D. Shuffle words randomly in each sentence
Step-by-Step Solution
  1. Step 1: Understand embedding input requirements

    An embedding layer maps integer token ids to dense vectors, and all sequences in a batch must have the same length, so variable-length sentences must be padded or truncated to a fixed length first.
  2. Step 2: Evaluate other options

    One-hot vectors are not fed to an embedding layer (it takes integer indices directly); normalizing embeddings and shuffling words do nothing to address variable sentence lengths.
  3. Final Answer:

    Pad or truncate sentences to a fixed length -> Option B
  4. Quick Check:

    Embedding input = fixed-length integer sequences [OK]
Quick Trick: Pad sequences to fixed length before embedding [OK]
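The padding/truncation step above can be sketched in plain Python. The token ids, the pad id of 0, and the `MAX_LEN` value are illustrative assumptions, not tied to any particular library (frameworks such as Keras provide an equivalent `pad_sequences` utility):

```python
# Minimal sketch: pad or truncate tokenized sentences to a fixed length
# before feeding them to an embedding layer. Token ids and MAX_LEN are
# hypothetical; 0 is assumed to be the reserved padding id.

def pad_or_truncate(seq, max_len, pad_id=0):
    """Right-pad with pad_id, or truncate, so len(result) == max_len."""
    return (seq + [pad_id] * max_len)[:max_len]

# Tokenized sentences as integer word ids (hypothetical vocabulary).
sentences = [
    [4, 17, 9],           # short sentence -> padded
    [2, 8, 5, 11, 3, 6],  # long sentence  -> truncated
]
MAX_LEN = 5
batch = [pad_or_truncate(s, MAX_LEN) for s in sentences]
print(batch)  # [[4, 17, 9, 0, 0], [2, 8, 5, 11, 3]]
```

The resulting fixed-length integer batch is exactly what an embedding layer expects; downstream, the LSTM can be told to ignore the pad positions (e.g. via masking) so padding does not affect the learned representation.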
Common Mistakes:
  • Skipping padding
  • Using one-hot vectors as input
  • Shuffling words randomly
