Complete the code to create a bidirectional RNN layer in PyTorch.
import torch.nn as nn
rnn = nn.RNN(input_size=10, hidden_size=20, num_layers=1, bidirectional=[1])
Setting bidirectional=True creates a bidirectional RNN layer that processes input sequences in both forward and backward directions.
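A minimal runnable sketch of the completed answer (the check on the `bidirectional` attribute is just illustration):

```python
import torch.nn as nn

# Completed blank: bidirectional takes a boolean, not a list
rnn = nn.RNN(input_size=10, hidden_size=20, num_layers=1, bidirectional=True)
print(rnn.bidirectional)  # True
```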
Complete the code to get the output shape of a bidirectional RNN layer.
output, hidden = rnn(input_seq)
output_shape = output.shape  # (seq_len, batch, [1] * hidden_size)
For a bidirectional RNN, the output's last dimension is 2 * hidden_size because it concatenates forward and backward outputs.
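A runnable sketch of the shape check; the sequence length and batch size here (5 and 3) are arbitrary illustrative values:

```python
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=10, hidden_size=20, num_layers=1, bidirectional=True)

# Default layout is (seq_len, batch, input_size) since batch_first=False
input_seq = torch.randn(5, 3, 10)
output, hidden = rnn(input_seq)
print(output.shape)  # torch.Size([5, 3, 40]): last dim is 2 * hidden_size
```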
Fix the error in the code to correctly initialize a bidirectional LSTM.
lstm = nn.LSTM(input_size=15, hidden_size=30, num_layers=2, bidirectional=[1])
The bidirectional parameter must be a boolean. Setting it to True enables the bidirectional LSTM.
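The fixed line, run end to end as a sketch (input sizes chosen for illustration); note that hn stacks num_layers * 2 direction states:

```python
import torch
import torch.nn as nn

# Fixed: bidirectional=True instead of a list
lstm = nn.LSTM(input_size=15, hidden_size=30, num_layers=2, bidirectional=True)

x = torch.randn(7, 4, 15)  # (seq_len, batch, input_size)
out, (hn, cn) = lstm(x)
print(out.shape)  # torch.Size([7, 4, 60]): 2 * hidden_size
print(hn.shape)   # torch.Size([4, 4, 30]): num_layers * 2 directions
```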
Complete the code to slice the forward and backward outputs from a bidirectional RNN.
forward_out = output[:, :, :[1]]
backward_out = output[:, :, [1]:[2]]
forward_out = output[:, :, :hidden_size] and backward_out = output[:, :, hidden_size:2 * hidden_size]. The forward output is the first hidden_size features; the backward output spans hidden_size to 2 * hidden_size.
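A runnable sketch of the completed slicing (tensor sizes here are illustrative):

```python
import torch
import torch.nn as nn

hidden_size = 20
rnn = nn.RNN(input_size=10, hidden_size=hidden_size, bidirectional=True)
output, _ = rnn(torch.randn(5, 3, 10))  # output: (5, 3, 2 * hidden_size)

# First hidden_size features are the forward pass, the rest the backward pass
forward_out = output[:, :, :hidden_size]
backward_out = output[:, :, hidden_size:2 * hidden_size]
print(forward_out.shape, backward_out.shape)  # both torch.Size([5, 3, 20])
```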
Fill all three blanks to extract forward and backward hidden states using slicing.
forward_hn = hn[[1]::[2]]
backward_hn = hn[[3]::[2]]
forward_hn = hn[0::2] and backward_hn = hn[1::2] separate forward (even indices) and backward (odd indices) across all layers.
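A sketch of the completed answer with a multi-layer RNN (three layers, illustrative sizes), showing that the strided slices split hn by direction:

```python
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=10, hidden_size=20, num_layers=3, bidirectional=True)
_, hn = rnn(torch.randn(5, 3, 10))  # hn: (num_layers * 2, batch, hidden_size)

forward_hn = hn[0::2]   # even indices: forward state of each layer
backward_hn = hn[1::2]  # odd indices: backward state of each layer
print(forward_hn.shape)  # torch.Size([3, 3, 20]): one state per layer
```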