Challenge - 5 Problems
LSTM Text Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
❓ Predict Output
intermediate · 2:00 remaining
Output of LSTM layer with return_sequences=True
What is the shape of the output tensor after passing an input of shape (32, 10, 50) through an LSTM layer with 64 units and return_sequences=True?
NLP
import tensorflow as tf

input_tensor = tf.random.uniform((32, 10, 50))
lstm_layer = tf.keras.layers.LSTM(64, return_sequences=True)
output = lstm_layer(input_tensor)
print(output.shape)
Attempts: 2 left
💡 Hint
Remember that return_sequences=True returns an output for each time step.
✗ Incorrect
When return_sequences=True, the LSTM returns an output for every time step, so the output shape is (batch_size, time_steps, units). Here batch_size=32, time_steps=10, and units=64, giving (32, 10, 64).
❓ Model Choice
intermediate · 2:00 remaining
Choosing LSTM for text classification
You want to classify movie reviews as positive or negative using their text. Which model architecture is best suited for this task?
Attempts: 2 left
💡 Hint
Think about models that capture word order and context.
✗ Incorrect
LSTM networks are designed to process sequences and capture context over time, making them well suited for text classification tasks where word order matters.
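As a concrete illustration, here is a minimal Keras sketch of such a classifier. The vocabulary size (10000), embedding dimension (64), and LSTM units (128) are illustrative assumptions, not values prescribed by the challenge.

```python
import tensorflow as tf

# Token IDs -> embedding -> LSTM -> probability that the review is positive.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=64),
    tf.keras.layers.LSTM(128),                       # reads the review in order, keeping context
    tf.keras.layers.Dense(1, activation='sigmoid')   # binary positive/negative output
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# A batch of 2 reviews, each 20 token IDs long, yields 2 probabilities.
probs = model(tf.zeros((2, 20), dtype=tf.int32))
print(probs.shape)
```

The LSTM consumes the embedded tokens one step at a time, so word order influences the final hidden state that the Dense layer classifies.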
❓ Hyperparameter
advanced · 2:00 remaining
Effect of increasing LSTM units
What is the most likely effect of increasing the number of units in an LSTM layer from 50 to 200 when training on a small text dataset?
Attempts: 2 left
💡 Hint
More units mean more parameters to learn.
✗ Incorrect
Increasing units increases model capacity and parameters, which can cause overfitting on small datasets and increase training time.
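The parameter growth is roughly quadratic in the number of units. A quick back-of-the-envelope check, assuming a 64-dimensional input (e.g. from an embedding layer; this input size is an assumption for illustration), using the standard Keras LSTM parameter count:

```python
def lstm_param_count(units, input_dim):
    # Keras LSTM: 4 gates, each with a kernel (input_dim x units),
    # a recurrent kernel (units x units), and a bias vector (units).
    return 4 * (units * (input_dim + units) + units)

small = lstm_param_count(50, 64)   # 23,000 parameters
large = lstm_param_count(200, 64)  # 212,000 parameters
print(small, large, large / small)
```

Going from 50 to 200 units multiplies the layer's parameters by roughly 9x here, which a small dataset may not have enough examples to constrain.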
❓ Metrics
advanced · 2:00 remaining
Evaluating LSTM text model with imbalanced classes
You trained an LSTM model for spam detection on a dataset where 90% of messages are not spam. Which metric is best to evaluate your model's performance?
Attempts: 2 left
💡 Hint
Think about metrics that handle class imbalance well.
✗ Incorrect
Accuracy can be misleading with imbalanced data. Precision and recall give better insight into how well the model detects the minority class (spam).
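To see why accuracy misleads here, consider a small sketch with made-up labels: 100 messages with 10 spam, where the model catches only half of the spam. Accuracy still looks excellent while recall exposes the miss rate.

```python
# 90 ham (0) and 10 spam (1); the model flags only 5 of the 10 spam messages.
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 90 + [1] * 5 + [0] * 5

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(accuracy, precision, recall)  # 0.95 1.0 0.5
```

A 95% accuracy hides that half the spam slips through; recall on the spam class is the number that matters for this task.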
🔧 Debug
expert · 3:00 remaining
Why does this LSTM model not learn?
You trained an LSTM model on text data but the training loss does not decrease. The model code is below. What is the most likely cause?
NLP
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=64, input_length=100),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Training data: X_train shape (1000, 100), y_train shape (1000,)
model.fit(X_train, y_train, epochs=10, batch_size=32)
Attempts: 2 left
💡 Hint
Check the input data shape and preprocessing.
✗ Incorrect
LSTM layers require every input sequence in a batch to have the same length. If variable-length token sequences are not padded (or truncated) to a fixed length during preprocessing, the model's input shape is inconsistent and training cannot proceed properly.
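As a sketch of the preprocessing fix (pure Python standing in for a utility such as tf.keras.preprocessing.sequence.pad_sequences; the token IDs below are made up): pad or truncate every sequence to one fixed length before calling fit.

```python
def pad_to_length(sequences, maxlen, pad_value=0):
    # Truncate sequences longer than maxlen and right-pad shorter ones,
    # so every row holds exactly maxlen token IDs.
    return [seq[:maxlen] + [pad_value] * max(0, maxlen - len(seq)) for seq in sequences]

ragged = [[4, 8, 15], [16, 23], [42, 7, 1, 9, 3]]
padded = pad_to_length(ragged, maxlen=4)
print(padded)  # [[4, 8, 15, 0], [16, 23, 0, 0], [42, 7, 1, 9]]
```

After padding, the data forms a rectangular (num_samples, maxlen) array that matches the Embedding layer's expected input.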