Challenge - 5 Problems
RNN Mastery Badge
Get all challenges correct to earn this badge!
Test your skills under time pressure!
❓ Predict Output
Intermediate
Output shape of nn.RNN with batch_first=True
Consider the following PyTorch code using nn.RNN with batch_first=True. What is the shape of the output tensor `out`?
PyTorch
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=5, hidden_size=3, num_layers=1, batch_first=True)
input_tensor = torch.randn(4, 7, 5)  # batch=4, seq_len=7, input_size=5
out, hn = rnn(input_tensor)
print(out.shape)
💡 Hint
Remember that batch_first=True means the batch dimension is the first dimension in the input and output.
✗ Incorrect
With batch_first=True, the input tensor has shape (batch_size, seq_len, input_size) and the output tensor has shape (batch_size, seq_len, hidden_size). Here, batch_size=4, seq_len=7, hidden_size=3, so the output shape is (4, 7, 3).
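The shape rule above can be checked directly; a minimal sketch (tensor sizes mirror the question, and note that hn does not follow the batch-first convention):

```python
import torch
import torch.nn as nn

# batch_first=True: input and output sequences are (batch, seq_len, features)
rnn = nn.RNN(input_size=5, hidden_size=3, num_layers=1, batch_first=True)
out, hn = rnn(torch.randn(4, 7, 5))  # batch=4, seq_len=7, input_size=5

print(tuple(out.shape))  # (4, 7, 3): (batch, seq_len, hidden_size)
print(tuple(hn.shape))   # (1, 4, 3): (num_layers, batch, hidden_size)
```

A common pitfall: batch_first=True affects the input and output sequences only; the final hidden state hn keeps its (num_layers, batch, hidden_size) layout.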
❓ Model Choice
Intermediate
Choosing RNN parameters for sequence classification
You want to build a simple RNN model to classify sequences of length 10 with 8 features each into 4 classes. Which nn.RNN configuration is correct for this task?
💡 Hint
Input size should match the number of features per time step.
✗ Incorrect
The input_size must be 8 because each time step has 8 features. hidden_size can be any positive integer; 16 is a reasonable choice. num_layers=1 is fine for a simple model, and batch_first=True is the common convention for batched data. Note that the 4 output classes are produced by a final linear layer on top of the RNN, not by nn.RNN itself.
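One plausible way to wire this configuration into a classifier (hidden_size=16 is an assumption; any positive value works, and the classification head is a standard linear layer, not part of nn.RNN):

```python
import torch
import torch.nn as nn

class RNNClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # input_size=8 must match the 8 features per time step
        self.rnn = nn.RNN(input_size=8, hidden_size=16, num_layers=1, batch_first=True)
        self.fc = nn.Linear(16, 4)  # map the final hidden state to 4 class logits

    def forward(self, x):           # x: (batch, 10, 8)
        out, hn = self.rnn(x)
        return self.fc(hn[-1])      # hn[-1]: (batch, 16) -> logits: (batch, 4)

logits = RNNClassifier()(torch.randn(2, 10, 8))
print(tuple(logits.shape))  # (2, 4)
```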
❓ Hyperparameter
Advanced
Effect of increasing num_layers in nn.RNN
What is the main effect of increasing the num_layers parameter in nn.RNN from 1 to 3?
💡 Hint
Think about what stacking layers means in neural networks.
✗ Incorrect
Increasing num_layers stacks multiple RNN layers. Each layer processes the hidden-state sequence produced by the layer below it, enabling the model to learn more complex temporal features. It does not change the expected input size, and it increases (rather than reduces) computation time.
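The stacking behavior shows up directly in the tensor shapes; a quick sketch (sizes chosen for illustration):

```python
import torch
import torch.nn as nn

# num_layers=3: layer k consumes the hidden-state sequence produced by layer k-1
rnn = nn.RNN(input_size=8, hidden_size=16, num_layers=3, batch_first=True)
out, hn = rnn(torch.randn(2, 10, 8))

print(tuple(out.shape))  # (2, 10, 16): hidden states of the *top* layer only
print(tuple(hn.shape))   # (3, 2, 16): one final hidden state per layer
```

Notice the input shape is unchanged from the single-layer case; only hn grows, gaining one slice per stacked layer.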
🔧 Debug
Advanced
Identifying error in nn.RNN input shape
What is wrong with how this code passes the input tensor to the RNN forward pass?
PyTorch
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=6, hidden_size=4, num_layers=1)
input_tensor = torch.randn(5, 6, 6)  # batch=5, seq_len=6, input_size=6
out, hn = rnn(input_tensor)
💡 Hint
Check the default expected input shape for nn.RNN when batch_first is False.
✗ Incorrect
By default (batch_first=False), nn.RNN expects input of shape (seq_len, batch, input_size), but this tensor was built as (batch, seq_len, input_size). Because the last dimension (6) still matches input_size, the forward pass happens to run without raising an error; PyTorch silently treats dimension 0 as seq_len and dimension 1 as batch, swapping the two roles. A RuntimeError would only be raised if the last dimension differed from input_size. The fix is to pass batch_first=True or permute the tensor.
❓ Metrics
Expert
Interpreting training loss behavior of nn.RNN model
You train an nn.RNN model for sequence prediction. The training loss decreases steadily, but the validation loss starts increasing after some epochs. What does this indicate?
💡 Hint
Think about what it means when validation loss increases while training loss decreases.
✗ Incorrect
When training loss decreases but validation loss increases, the model fits training data too closely and fails to generalize, which is called overfitting.
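A common response to this pattern is early stopping: halt training once validation loss stops improving. A minimal sketch with synthetic data (the patience value and toy model are assumptions for illustration, not a prescribed recipe):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Toy RNN regressor on random data -- illustration only
model = nn.RNN(input_size=3, hidden_size=8, batch_first=True)
head = nn.Linear(8, 1)
opt = torch.optim.Adam(list(model.parameters()) + list(head.parameters()), lr=1e-2)
loss_fn = nn.MSELoss()

x_tr, y_tr = torch.randn(32, 5, 3), torch.randn(32, 1)  # training split
x_va, y_va = torch.randn(16, 5, 3), torch.randn(16, 1)  # validation split

best_val, patience, bad_epochs = float("inf"), 3, 0
for epoch in range(50):
    opt.zero_grad()
    _, hn = model(x_tr)
    loss = loss_fn(head(hn[-1]), y_tr)
    loss.backward()
    opt.step()

    with torch.no_grad():                     # monitor held-out loss each epoch
        _, hn = model(x_va)
        val = loss_fn(head(hn[-1]), y_va).item()
    if val < best_val - 1e-4:
        best_val, bad_epochs = val, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:            # validation stopped improving: stop
            break
```

In practice one would also checkpoint the model weights at the best validation loss and restore them after stopping.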