Hard · Model Choice · Q8 of 15
NLP - Sequence Models for NLP
You want to classify movie reviews using a Bidirectional LSTM on sequences of length 50 with 100-dimensional embeddings. Which model architecture is best suited to produce a fixed-length vector for classification?
A. Embedding(input_dim=10000, output_dim=100, input_length=50) -> Bidirectional(LSTM(64)) -> Dense(1, activation='sigmoid')
B. Embedding(input_dim=10000, output_dim=100, input_length=50) -> Bidirectional(LSTM(64, return_sequences=True)) -> Dense(1, activation='sigmoid')
C. Embedding(input_dim=10000, output_dim=100, input_length=50) -> Bidirectional(LSTM(64, return_sequences=True)) -> GlobalMaxPooling1D() -> Dense(1, activation='sigmoid')
D. Embedding(input_dim=10000, output_dim=100, input_length=50) -> LSTM(64) -> Dense(1, activation='sigmoid')
Step-by-Step Solution:
  1. Step 1: Understand output shape

    A Bidirectional LSTM with return_sequences=False (the default) outputs a single fixed-length vector: the final forward and backward hidden states are concatenated, giving shape (batch, 2 * units).
  2. Step 2: Check model options

    Option A, Embedding(input_dim=10000, output_dim=100, input_length=50) -> Bidirectional(LSTM(64)) -> Dense(1, activation='sigmoid'), uses a Bidirectional LSTM without return_sequences, producing a fixed-size (batch, 128) vector that feeds Dense directly.
  3. Step 3: Analyze other options

    Option B passes a full sequence of shape (batch, 50, 128) straight to Dense, which does not reduce it to one score per review. Option C does produce a fixed-length vector, but only by adding a GlobalMaxPooling1D layer that Option A makes unnecessary. Option D outputs a fixed vector but is unidirectional, so it misses the backward context the question asks for.
  4. Final Answer:

    Option A: Embedding -> Bidirectional(LSTM(64)) -> Dense(1, activation='sigmoid')
  5. Quick Check:

    Use return_sequences=False (the default) for fixed-vector output [OK]
Quick Trick: return_sequences=False outputs a fixed-length vector [OK]
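The shape reasoning in the steps above can be sketched with plain NumPy placeholders (the batch size of 32 is an assumption; the arrays stand in for layer outputs, not a real Keras run):

```python
import numpy as np

# Dimensions from the question (batch size 32 is an assumed example).
batch, seq_len, embed_dim, units = 32, 50, 100, 64

# Embedding maps (batch, 50) token ids to (batch, 50, 100) vectors.
embedded = np.zeros((batch, seq_len, embed_dim))
print(embedded.shape)       # (32, 50, 100)

# Bidirectional(LSTM(64)) with return_sequences=False concatenates the
# final forward and backward states into one fixed-length vector.
fixed_vector = np.zeros((batch, 2 * units))
print(fixed_vector.shape)   # (32, 128) -> feeds Dense(1) directly

# With return_sequences=True every timestep is kept: a 3-D sequence,
# which Dense(1) cannot reduce to a single score per review on its own.
full_sequence = np.zeros((batch, seq_len, 2 * units))
print(full_sequence.shape)  # (32, 50, 128) -> needs pooling first
```

The doubling to 128 is the key detail: bidirectionality concatenates the two directions' hidden states, so Dense sees twice the LSTM's unit count.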
Common Mistakes:
  • Using return_sequences=True without pooling
  • Choosing unidirectional LSTM when bidirectional is needed
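The first mistake above can be illustrated with NumPy: a sequence output must be reduced over the time axis before a Dense(1) head can score it. GlobalMaxPooling1D in Keras takes the maximum over timesteps, which is just a max along axis 1 (shapes here mirror the question; the random array is a stand-in for real activations):

```python
import numpy as np

# Stand-in for a Bidirectional(LSTM(64), return_sequences=True) output:
# (batch, timesteps, features) = (32, 50, 128).
seq_output = np.random.rand(32, 50, 128)

# GlobalMaxPooling1D's effect: max over the time axis (axis=1),
# collapsing the sequence into one fixed-length vector per review.
pooled = seq_output.max(axis=1)
print(pooled.shape)  # (32, 128) -> now safe to pass to Dense(1)
```

Without this reduction (Option B), the 3-D tensor reaches Dense with an extra time dimension, which is why Option C needs the pooling layer and Option A avoids the issue entirely.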
