Complete the code to create a Flatten layer in PyTorch.
import torch.nn as nn

flatten = nn.[1]()
The Flatten layer reshapes the input tensor into a 2D tensor, keeping the batch dimension and flattening the rest.
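A minimal sketch of the completed code, filling the blank with nn.Flatten as the explanation describes:

```python
import torch
import torch.nn as nn

flatten = nn.Flatten()  # the blank [1] is Flatten

x = torch.randn(10, 3, 28, 28)  # batch of 10 images
out = flatten(x)
print(out.shape)  # torch.Size([10, 2352]) since 3 * 28 * 28 = 2352
```

By default nn.Flatten uses start_dim=1, so the batch dimension is preserved.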
Complete the code to flatten the input tensor except the batch dimension.
import torch

x = torch.randn(10, 3, 28, 28)  # batch of 10 images
x_flat = x.[1](start_dim=1)
The flatten method with start_dim=1 flattens all dimensions except the batch dimension.
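A sketch of the completed answer, using the tensor flatten method the explanation names:

```python
import torch

x = torch.randn(10, 3, 28, 28)  # batch of 10 images
x_flat = x.flatten(start_dim=1)  # the blank [1] is flatten
print(x_flat.shape)  # torch.Size([10, 2352])
```

start_dim=1 leaves dimension 0 (the batch) alone and collapses dimensions 1 through 3 into one.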
Fix the error in the code to correctly flatten the input tensor.
import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()

    def forward(self, x):
        x = self.flatten(x)
        x = x.[1](1)
        return x
The flatten method with argument 1 flattens all dimensions except the batch dimension. Since nn.Flatten has already produced a 2D tensor, applying flatten again leaves it unchanged, so it is correct here.
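A runnable sketch of the fixed model, with the blank filled in as flatten per the explanation:

```python
import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()

    def forward(self, x):
        x = self.flatten(x)  # (N, C, H, W) -> (N, C*H*W)
        x = x.flatten(1)     # input is already 2D, so this is a no-op
        return x

model = Model()
out = model(torch.randn(10, 3, 28, 28))
print(out.shape)  # torch.Size([10, 2352])
```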
Fill both blanks to create a dictionary comprehension that maps each word to its length if length is greater than 3.
words = ['apple', 'cat', 'banana', 'dog']
lengths = {word: [1] for word in words if [2]}
The dictionary comprehension maps each word to its length using len(word). The condition len(word) > 3 filters words longer than 3 characters.
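The completed comprehension, with both blanks filled as the explanation states:

```python
words = ['apple', 'cat', 'banana', 'dog']
# [1] = len(word), [2] = len(word) > 3
lengths = {word: len(word) for word in words if len(word) > 3}
print(lengths)  # {'apple': 5, 'banana': 6}
```

'cat' and 'dog' are filtered out because their length is exactly 3, which fails the strict > 3 test.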
Fill all three blanks to create a dictionary comprehension that maps uppercase words to their lengths if length is less than 6.
words = ['apple', 'cat', 'banana', 'dog']
result = {[1]: [2] for word in words if [3]}
The dictionary comprehension maps the uppercase version of each word to its length. The condition filters words with length less than 6.
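The completed comprehension, with all three blanks filled as the explanation describes:

```python
words = ['apple', 'cat', 'banana', 'dog']
# [1] = word.upper(), [2] = len(word), [3] = len(word) < 6
result = {word.upper(): len(word) for word in words if len(word) < 6}
print(result)  # {'APPLE': 5, 'CAT': 3, 'DOG': 3}
```

'banana' is excluded because its length is exactly 6, which fails the strict < 6 test.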