PyTorch · ~20 mins

nn.LSTM layer in PyTorch - Practice Problems & Coding Challenges

Challenge - 5 Problems
Predict Output
intermediate
Output shape of nn.LSTM with batch_first=True
Consider the following PyTorch code snippet using nn.LSTM with batch_first=True. What is the shape of the output tensor `out`?
PyTorch
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=2, batch_first=True)
input_tensor = torch.randn(5, 7, 10)  # batch=5, seq_len=7, input_size=10
out, (h_n, c_n) = lstm(input_tensor)
print(out.shape)
A. torch.Size([7, 5, 20])
B. torch.Size([5, 20, 7])
C. torch.Size([7, 20, 5])
D. torch.Size([5, 7, 20])
💡 Hint
Remember that batch_first=True means the batch dimension is the first dimension in the input and output.
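After you've locked in an answer, you can verify the shape yourself. This sketch reuses the exact configuration from the snippet above and also prints the hidden-state shapes for comparison:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=2, batch_first=True)
x = torch.randn(5, 7, 10)  # batch=5, seq_len=7, input_size=10
out, (h_n, c_n) = lstm(x)

# With batch_first=True, out is (batch, seq_len, hidden_size)
print(out.shape)
# h_n and c_n are (num_layers, batch, hidden_size) regardless of batch_first
print(h_n.shape, c_n.shape)
```

Note that `batch_first=True` changes the layout of `out` (and of the input), but not of `h_n` or `c_n`.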
Model Choice
intermediate
Choosing LSTM parameters for sequence classification
You want to build an LSTM model to classify sequences of length 15 with 8 features each into 3 classes. Which nn.LSTM configuration is correct to output a tensor suitable for classification after processing the entire sequence?
A. nn.LSTM(input_size=3, hidden_size=8, num_layers=2, batch_first=True)
B. nn.LSTM(input_size=15, hidden_size=3, num_layers=1, batch_first=False)
C. nn.LSTM(input_size=8, hidden_size=16, num_layers=1, batch_first=True)
D. nn.LSTM(input_size=8, hidden_size=15, num_layers=3, batch_first=False)
💡 Hint
Input size should match the feature dimension per time step.
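To see why the feature dimension must match `input_size`, here is a minimal sketch of a sequence classifier where the LSTM's `input_size` equals the 8 features per time step. The `nn.Linear` head, the hidden size of 16, and the batch size of 4 are illustrative choices, not part of the question:

```python
import torch
import torch.nn as nn

class SeqClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # input_size matches the 8 features per time step
        self.lstm = nn.LSTM(input_size=8, hidden_size=16,
                            num_layers=1, batch_first=True)
        self.fc = nn.Linear(16, 3)  # map final hidden state to 3 classes

    def forward(self, x):
        out, (h_n, c_n) = self.lstm(x)
        # h_n[-1] is the last layer's hidden state after the whole sequence
        return self.fc(h_n[-1])

model = SeqClassifier()
logits = model(torch.randn(4, 15, 8))  # batch=4, seq_len=15, features=8
print(logits.shape)  # one row of 3 class logits per sequence
```

The sequence length (15) never appears in the LSTM constructor: the module iterates over whatever sequence length the input provides.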
Hyperparameter
advanced
Effect of increasing num_layers in nn.LSTM
What is the main effect of increasing the num_layers parameter in nn.LSTM from 1 to 3?
A. The LSTM will have 3 stacked layers, allowing it to learn more complex temporal patterns.
B. The input size of the LSTM will automatically triple.
C. The output tensor shape will change from 3D to 2D.
D. The LSTM will process sequences in reverse order.
💡 Hint
Think about what stacking layers means in neural networks.
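You can check what `num_layers` does (and does not) change directly. This sketch compares a 1-layer and a 3-layer LSTM with otherwise identical, illustrative sizes:

```python
import torch
import torch.nn as nn

# Identical hyperparameters except num_layers; sizes are illustrative
lstm1 = nn.LSTM(input_size=6, hidden_size=8, num_layers=1, batch_first=True)
lstm3 = nn.LSTM(input_size=6, hidden_size=8, num_layers=3, batch_first=True)

x = torch.randn(2, 5, 6)  # batch=2, seq_len=5, features=6
out1, (h1, _) = lstm1(x)
out3, (h3, _) = lstm3(x)

# The output shape is unchanged: stacking only deepens the network
print(out1.shape, out3.shape)  # both are (batch, seq_len, hidden_size)
print(h1.shape, h3.shape)      # h_n gains one slice per layer
```

Stacking feeds each layer's output sequence into the next layer, so the input size and output shape stay fixed; only the depth (and `h_n`'s first dimension) grows.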
🔧 Debug
advanced
Identifying error in LSTM input shape
What error will this code raise when running the LSTM forward pass?
PyTorch
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=5, hidden_size=10, batch_first=True)
input_tensor = torch.randn(4, 6, 4)  # batch=4, seq_len=6, input_size=4
out, (h_n, c_n) = lstm(input_tensor)
A. RuntimeError: input size mismatch. Expected input_size=5 but got 4.
B. TypeError: LSTM input must be 2D tensor.
C. ValueError: batch_first must be False for this input shape.
D. No error, code runs successfully.
💡 Hint
Check the input tensor's last dimension against the LSTM's input_size.
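Once you've made your guess, you can reproduce the failure and the fix. This sketch catches the exception from the mismatched input, then runs the corrected shape (the exact wording of PyTorch's error message may differ across versions):

```python
import torch
import torch.nn as nn

# Same setup as the snippet above: last input dim (4) != input_size (5)
lstm = nn.LSTM(input_size=5, hidden_size=10, batch_first=True)
bad_input = torch.randn(4, 6, 4)

try:
    lstm(bad_input)
    caught = None
except RuntimeError as e:
    caught = e  # PyTorch reports the input_size mismatch here
    print("RuntimeError:", e)

# Fixing the last dimension to match input_size=5 runs cleanly
out, (h_n, c_n) = lstm(torch.randn(4, 6, 5))
print(out.shape)
```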
Metrics
expert
Interpreting LSTM hidden state shapes after forward pass
After running an nn.LSTM with num_layers=2, hidden_size=8, batch_first=True on input of shape (3, 10, 6), what is the shape of the hidden state tensor h_n?
A. torch.Size([10, 3, 8])
B. torch.Size([2, 3, 8])
C. torch.Size([3, 10, 8])
D. torch.Size([3, 2, 8])
💡 Hint
Hidden state shape is (num_layers, batch_size, hidden_size).
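The hint's formula is easy to confirm. This sketch runs exactly the configuration in the question and prints both `h_n` and `out`, since contrasting the two is where most of the confusion comes from:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=6, hidden_size=8, num_layers=2, batch_first=True)
out, (h_n, c_n) = lstm(torch.randn(3, 10, 6))  # batch=3, seq_len=10, features=6

# h_n stacks the final hidden state of each layer: (num_layers, batch, hidden_size)
print(h_n.shape)
# out is (batch, seq_len, hidden_size); batch_first affects out but never h_n
print(out.shape)
```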