Complete the code to create an LSTM layer with input size 10 and hidden size 20.
lstm = nn.LSTM(input_size=[1], hidden_size=20)
The input_size parameter defines the number of expected features in the input. Here, it should be 10.
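For reference, a minimal runnable version with the blank filled in:

```python
import torch.nn as nn

# Blank [1] filled with 10: the LSTM expects 10 features per time step.
lstm = nn.LSTM(input_size=10, hidden_size=20)
```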
Complete the code to initialize hidden and cell states for an LSTM with batch size 3 and hidden size 5.
h0 = torch.zeros(1, [1], 5)
c0 = torch.zeros(1, [1], 5)
The second dimension is the batch size, which should be 3 here.
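A runnable sketch with the blanks filled in; the leading 1 assumes a single-layer, unidirectional LSTM:

```python
import torch

# State shape is (num_layers, batch, hidden_size); blank [1] is the batch size, 3.
h0 = torch.zeros(1, 3, 5)
c0 = torch.zeros(1, 3, 5)
```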
Fix the error in the code to run input through the LSTM layer.
output, (hn, cn) = lstm([1])
The variable input_tensor is the correct input tensor to pass to the LSTM.
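A complete version of the corrected call. The shapes here are illustrative (seq_len=7, batch=3), assuming the earlier layer with input_size=10 and hidden_size=20:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=10, hidden_size=20)
# Hypothetical input of shape (seq_len, batch, input_size),
# since batch_first defaults to False.
input_tensor = torch.randn(7, 3, 10)
output, (hn, cn) = lstm(input_tensor)
# output holds the top layer's hidden state at every time step;
# hn and cn hold the final hidden and cell states.
```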
Fill both blanks to create an LSTM with 2 layers and batch_first enabled.
lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=[1], batch_first=[2])
num_layers=2 creates two stacked LSTM layers. batch_first=True makes input shape (batch, seq, feature).
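A runnable version with both blanks filled in, plus a sample input (the shape values are illustrative):

```python
import torch
import torch.nn as nn

# Blank [1] = 2 (stacked layers), blank [2] = True (batch-first input layout).
lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=2, batch_first=True)
x = torch.randn(3, 7, 10)  # (batch, seq, feature) because batch_first=True
output, (hn, cn) = lstm(x)
# output is (batch, seq, hidden); hn is (num_layers, batch, hidden).
```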
Fill all three blanks to extract the last hidden state from the LSTM output.
last_hidden = hn[[1], [2], [3]]
hn[-1, 0, 1] accesses the last layer's hidden state for the first batch element and second hidden unit.
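Putting it together, a sketch that indexes into hn as described; the two-layer LSTM and input shapes are assumptions carried over from the earlier exercises:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=2)
x = torch.randn(7, 3, 10)  # hypothetical (seq_len, batch, feature) input
output, (hn, cn) = lstm(x)
# hn has shape (num_layers, batch, hidden_size) = (2, 3, 20).
# Blanks [1], [2], [3] filled with -1, 0, 1:
last_hidden = hn[-1, 0, 1]  # last layer, first batch element, second hidden unit
```

Indexing with three integers yields a zero-dimensional (scalar) tensor; use `hn[-1]` alone to get the full last-layer state of shape (batch, hidden_size).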