TensorFlow · ~20 mins

LSTM layer in TensorFlow - Practice Problems & Coding Challenges

Challenge - 5 Problems
Predict Output (intermediate)
Output shape of LSTM layer with return_sequences

Consider the following TensorFlow code snippet creating an LSTM layer:

import tensorflow as tf

lstm_layer = tf.keras.layers.LSTM(10, return_sequences=True)
input_tensor = tf.random.uniform((32, 5, 8))  # batch=32, time_steps=5, features=8
output = lstm_layer(input_tensor)
print(output.shape)

What is the printed output shape?

A. (5, 32, 10)
B. (32, 10)
C. (32, 5, 10)
D. (32, 8, 10)
💡 Hint

Remember that return_sequences=True makes the LSTM return output for each time step.
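The shape rule can be sketched in plain Python, with no TensorFlow required. `lstm_output_shape` is a hypothetical helper written for this hint, not a Keras API:

```python
def lstm_output_shape(batch, time_steps, units, return_sequences):
    # With return_sequences=True, a Keras LSTM returns the hidden state
    # at every time step: (batch, time_steps, units).
    # With return_sequences=False (the default), it returns only the
    # final hidden state: (batch, units).
    if return_sequences:
        return (batch, time_steps, units)
    return (batch, units)

print(lstm_output_shape(32, 5, 10, return_sequences=True))
print(lstm_output_shape(32, 5, 10, return_sequences=False))
```

Note that the number of input features (8 here) never appears in the output shape; the LSTM projects each time step into its `units`-dimensional hidden state.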

Model Choice (intermediate)
Choosing LSTM for sequence data

You want to build a model to predict the next word in a sentence based on previous words. Which model layer is best suited for this task?

A. Convolutional layer
B. LSTM layer
C. Dense layer with ReLU activation
D. Dropout layer
💡 Hint

Think about which layer can remember information over time steps.
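The idea of "remembering over time steps" can be illustrated with a toy recurrence in plain Python. This is a deliberately simplified stand-in for an LSTM cell (made-up weights, no gates), just to show how a recurrent state carries earlier inputs forward:

```python
def toy_recurrence(inputs, w_in=0.5, w_rec=0.9):
    # Each step mixes the new input with the previous state, so
    # earlier items in the sequence still influence the final state,
    # which is what next-word prediction needs.
    state = 0.0
    for x in inputs:
        state = w_in * x + w_rec * state
    return state

# A signal seen only at the first step still affects the final state:
print(toy_recurrence([1.0, 0.0, 0.0]))
```

A convolutional or dense layer has no such state between positions, which is why a recurrent layer is the natural fit here.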

Hyperparameter (advanced)
Effect of increasing LSTM units

What is the most likely effect of increasing the number of units in an LSTM layer from 50 to 200?

A. Model capacity increases, possibly improving accuracy but increasing risk of overfitting
B. Model will train faster due to fewer parameters
C. Model will ignore earlier time steps
D. Model will require less memory
💡 Hint

More units mean more parameters and complexity.
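The growth in parameters can be counted directly using the standard LSTM parameter formula (four gates, each with a kernel, a recurrent kernel, and a bias). The input feature size of 8 below is an assumption for illustration:

```python
def lstm_param_count(units, input_dim):
    # Each of the 4 gates (input, forget, cell, output) has:
    #   a kernel of shape (input_dim, units),
    #   a recurrent kernel of shape (units, units),
    #   and a bias of shape (units,).
    return 4 * (input_dim * units + units * units + units)

print(lstm_param_count(50, 8))   # units=50
print(lstm_param_count(200, 8))  # units=200
```

Because of the `units * units` recurrent term, going from 50 to 200 units grows the parameter count roughly quadratically (about 14x here), not 4x.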

Metrics (advanced)
Interpreting LSTM training loss curve

While training an LSTM model, you observe that the training loss steadily decreases but the validation loss starts increasing after some epochs. What does this indicate?

A. There is a bug in the loss calculation
B. The model is underfitting the training data
C. The model has converged perfectly
D. The model is overfitting the training data
💡 Hint

Think about what it means when validation loss worsens but training loss improves.
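The pattern in question can be detected programmatically. The helper and loss values below are hypothetical, written for this hint only, but they capture the signature the question describes:

```python
def first_overfit_epoch(train_losses, val_losses):
    # Returns the first epoch (0-indexed) at which training loss keeps
    # falling while validation loss rises relative to the previous
    # epoch -- the classic overfitting signature. Returns None if the
    # pattern never appears.
    for epoch in range(1, len(val_losses)):
        if (train_losses[epoch] < train_losses[epoch - 1]
                and val_losses[epoch] > val_losses[epoch - 1]):
            return epoch
    return None

# Made-up loss curves: training keeps improving, validation turns around.
train = [1.0, 0.7, 0.5, 0.35, 0.25]
val = [1.1, 0.8, 0.6, 0.65, 0.75]
print(first_overfit_epoch(train, val))
```

In practice this is the situation where callbacks such as Keras's `EarlyStopping` (monitoring `val_loss`) are used to halt training.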

🔧 Debug (expert)
Identifying error in LSTM input shape

Given this code snippet:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(10,)),
    tf.keras.layers.Dense(1)
])

input_data = tf.random.uniform((64, 10, 5))
output = model(input_data)
print(output.shape)

What error will this code raise?

A. ValueError: Input 0 of layer 'lstm' is incompatible with the layer: expected shape=(None, 10), found shape=(64, 10, 5)
B. TypeError: Cannot convert input data to tensor
C. RuntimeError: LSTM layer requires 3D input but got 2D
D. No error; output shape is (64, 1)
💡 Hint

Check the input_shape parameter and the actual input data shape.