
LSTM for text in NLP - Practice Problems & Coding Challenges

Challenge - 5 Problems
Predict Output (intermediate)
Output of LSTM layer with return_sequences=True
What is the shape of the output tensor after passing input of shape (32, 10, 50) through an LSTM layer with 64 units and return_sequences=True?
import tensorflow as tf
input_tensor = tf.random.uniform((32, 10, 50))
lstm_layer = tf.keras.layers.LSTM(64, return_sequences=True)
output = lstm_layer(input_tensor)
print(output.shape)
A) (10, 64)
B) (32, 64)
C) (32, 10, 64)
D) (32, 10, 50)
💡 Hint: Remember that return_sequences=True returns the output for every time step.
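The shape rule can be checked without running TensorFlow at all. A sketch of the arithmetic, using the sizes from the problem:

```python
# Input shape: (batch, timesteps, features) = (32, 10, 50); LSTM has 64 units.
batch, timesteps, features = 32, 10, 50
units = 64

# return_sequences=True: the layer emits a 64-dim hidden state at EVERY time step
shape_seq = (batch, timesteps, units)
print(shape_seq)   # (32, 10, 64)

# return_sequences=False (the default): only the FINAL hidden state is returned
shape_last = (batch, units)
print(shape_last)  # (32, 64)
```

Note that the feature dimension (50) disappears: the LSTM's output size is determined by its unit count, not by the input features.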
Model Choice (intermediate)
Choosing LSTM for text classification
You want to classify movie reviews as positive or negative using their text. Which model architecture is best suited for this task?
A) A simple feedforward neural network with bag-of-words input
B) An LSTM network processing word sequences
C) A convolutional neural network on raw text characters
D) A linear regression model on word counts
💡 Hint: Think about models that capture word order and context.
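An LSTM reads the review token by token, so it can use word order ("not good" vs. "good") that bag-of-words models discard. A minimal sketch of such a classifier, assuming reviews are integer-encoded over a 10,000-word vocabulary and padded to length 100 (these sizes are illustrative, not tuned):

```python
import tensorflow as tf

# Sketch of an LSTM-based sentiment classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=64),  # word ids -> dense vectors
    tf.keras.layers.LSTM(64),                                   # reads the sequence in order
    tf.keras.layers.Dense(1, activation='sigmoid'),             # P(review is positive)
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# One padded batch of 2 reviews, 100 tokens each -> 2 probabilities
probs = model(tf.zeros((2, 100), dtype=tf.int32))
print(probs.shape)  # (2, 1)
```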
Hyperparameter (advanced)
Effect of increasing LSTM units
What is the most likely effect of increasing the number of units in an LSTM layer from 50 to 200 when training on a small text dataset?
A) The model may overfit and training time will increase
B) The model will ignore word order
C) The model will train faster and generalize better
D) The model will underfit due to fewer parameters
💡 Hint: More units mean more parameters to learn.
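The parameter growth is easy to quantify. An LSTM has four gates, each with an input weight matrix, a recurrent weight matrix, and a bias, giving 4 × (input_dim × units + units² + units) parameters. A quick sketch, assuming (purely for illustration) that 64-dimensional embeddings feed the LSTM:

```python
def lstm_param_count(input_dim, units):
    # 4 gates, each with input weights (input_dim x units),
    # recurrent weights (units x units), and a bias (units)
    return 4 * units * (input_dim + units + 1)

small = lstm_param_count(64, 50)    # 23,000 parameters
large = lstm_param_count(64, 200)   # 212,000 parameters
print(large / small)                # roughly 9x more parameters
```

Because units appear squared in the recurrent term, quadrupling the unit count grows the layer roughly ninefold here: far more capacity to memorize a small dataset, and more computation per step.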
Metrics (advanced)
Evaluating LSTM text model with imbalanced classes
You trained an LSTM model for spam detection on a dataset where 90% of messages are not spam. Which metric is best to evaluate your model's performance?
A) Accuracy
B) Confusion matrix only
C) Mean Squared Error
D) Precision and Recall
💡 Hint: Think about metrics that handle class imbalance well.
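The failure mode of accuracy under 90/10 imbalance can be seen with a toy example (labels below are illustrative; 1 = spam, 0 = not spam). A model that predicts "not spam" for everything scores 90% accuracy while catching zero spam:

```python
y_true = [0] * 9 + [1]   # 90% not spam, 10% spam
y_pred = [0] * 10        # degenerate model: always predicts "not spam"

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
recall = tp / (tp + fn) if (tp + fn) else 0.0
precision = tp / (tp + fp) if (tp + fp) else 0.0  # guard: no positive predictions

print(accuracy)  # 0.9 -- looks good
print(recall)    # 0.0 -- reveals that no spam was caught
```

Precision and recall score the minority class directly, which is why they expose this model while accuracy hides it.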
🔧 Debug (expert)
Why does this LSTM model not learn?
You trained an LSTM model on text data but the training loss does not decrease. The model code is below. What is the most likely cause?
import tensorflow as tf
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=64, input_length=100),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# Training data: X_train shape (1000, 100), y_train shape (1000,)
model.fit(X_train, y_train, epochs=10, batch_size=32)
A) The input sequences are not padded to the same length
B) The Embedding layer output_dim is too small
C) The model lacks an activation function in the LSTM layer
D) The loss function is incorrect for binary classification
💡 Hint: Check the input data shape and preprocessing.
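The hint points at preprocessing: a model like the one above expects a rectangular integer array, so ragged tokenized reviews must be padded (or truncated) to a fixed length before model.fit. A pure-Python sketch of what a padding utility such as Keras's pad_sequences does with its default 'pre' padding (the example data is made up for illustration):

```python
def pad_to_length(seqs, maxlen, pad_value=0):
    # Truncate longer sequences and pre-pad shorter ones with pad_value,
    # so every row ends up exactly maxlen tokens long.
    out = []
    for s in seqs:
        s = s[-maxlen:]                                   # keep the last maxlen tokens
        out.append([pad_value] * (maxlen - len(s)) + s)   # pad at the front
    return out

ragged = [[5, 8, 2], [7, 1, 9, 4, 3, 6]]
padded = pad_to_length(ragged, 5)
print(padded)  # [[0, 0, 5, 8, 2], [1, 9, 4, 3, 6]]
```

Pre-padding is the usual choice for LSTMs because the informative tokens then sit closest to the final time step, whose hidden state feeds the classifier.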