Debug Q6 of 15 (medium)
NLP - Sequence Models for NLP
Examine the following PyTorch GRU usage:
import torch
import torch.nn as nn

gru = nn.GRU(input_size=20, hidden_size=40)
input_tensor = torch.randn(10, 5, 20)
output, hidden = gru(input_tensor, hidden=None)

What is the issue with this code?
A. The input tensor shape does not match the expected shape when batch_first=False
B. The hidden_size must be equal to input_size
C. The GRU layer requires batch_first=True for this input
D. The hidden state should not be passed as None explicitly
Step-by-Step Solution
  1. Step 1: Check the default batch_first value

    By default, batch_first=False, so the expected input shape is (seq_len, batch, input_size).
  2. Step 2: Compare the input shape

    The input tensor shape (10, 5, 20) corresponds to (seq_len=10, batch=5, input_size=20), which matches the default layout, so options A and C are ruled out.
  3. Step 3: Check the hidden argument

    Passing an initial hidden state is optional: if it is omitted, the GRU initializes it to zeros automatically. Moreover, in PyTorch the forward keyword for the initial hidden state is hx, not hidden, so gru(input_tensor, hidden=None) would actually raise a TypeError.
  4. Step 4: Identify the problem

    Explicitly passing the hidden state as None is unnecessary (and, written as hidden=None, not even a valid keyword); simply omit the argument.
  5. Final Answer:

    The hidden state should not be passed as None explicitly -> Option D
  6. Quick Check:

    GRU initializes the hidden state to zeros automatically when it is not provided [OK]
Quick Trick: Do not pass hidden=None explicitly; let GRU handle initialization [OK]
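The corrected usage can be sketched as follows (a minimal example; the layer sizes are taken from the question):

```python
import torch
import torch.nn as nn

# Corrected usage: omit the hidden-state argument entirely;
# nn.GRU then initializes it to zeros internally.
# (If you do want to pass an initial state, the keyword is hx,
# or pass it positionally: gru(input_tensor, h0).)
gru = nn.GRU(input_size=20, hidden_size=40)
input_tensor = torch.randn(10, 5, 20)  # (seq_len=10, batch=5, input_size=20)
output, hidden = gru(input_tensor)

print(output.shape)  # torch.Size([10, 5, 40]) -> (seq_len, batch, hidden_size)
print(hidden.shape)  # torch.Size([1, 5, 40])  -> (num_layers, batch, hidden_size)
```

Note that the output carries the hidden state for every time step, while the second return value is only the final step's state for each layer.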
Common Mistakes:
  • Assuming batch_first=True by default
  • Passing hidden=None unnecessarily
  • Confusing hidden_size with input_size
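The first mistake above is worth a quick demonstration. A short sketch contrasting the two input layouts (same sizes as in the question):

```python
import torch
import torch.nn as nn

# With the default batch_first=False the input is (seq_len, batch, input_size);
# with batch_first=True it is (batch, seq_len, input_size).
gru_default = nn.GRU(input_size=20, hidden_size=40)                   # batch_first=False
gru_bf = nn.GRU(input_size=20, hidden_size=40, batch_first=True)

seq_first = torch.randn(10, 5, 20)        # (seq_len=10, batch=5, input_size=20)
batch_major = seq_first.transpose(0, 1)   # (batch=5, seq_len=10, input_size=20)

out1, h1 = gru_default(seq_first)
out2, h2 = gru_bf(batch_major)

print(out1.shape)  # torch.Size([10, 5, 40])
print(out2.shape)  # torch.Size([5, 10, 40])
# The hidden state is (num_layers, batch, hidden_size) in BOTH cases:
print(h1.shape, h2.shape)
```

Note that batch_first only changes the layout of the input and output tensors; the returned hidden state keeps the (num_layers, batch, hidden_size) shape either way.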
