Complete the code to define a convolutional layer in PyTorch.
conv1 = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=[1], stride=1, padding=1)
A 3x3 kernel is the most common choice in image-classification CNNs for capturing local features; with stride 1 and padding 1 it also preserves the spatial dimensions of the input.
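A runnable sketch of the completed line, assuming the answer the explanation names (kernel_size=3); the input size of 32x32 is chosen here only for illustration:

```python
import torch
import torch.nn as nn

# 3 input channels (RGB), 16 output feature maps, 3x3 kernel.
# stride=1 with padding=1 keeps height and width unchanged.
conv1 = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=1, padding=1)

x = torch.randn(1, 3, 32, 32)  # a batch of one 32x32 RGB image (assumed size)
out = conv1(x)
print(out.shape)               # torch.Size([1, 16, 32, 32])
```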
Complete the code to add a max pooling layer after the convolution.
pool = nn.MaxPool2d(kernel_size=[1], stride=2)
Max pooling with kernel size 2 and stride 2 reduces the spatial dimensions by half, which is common in CNNs.
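A sketch of the completed pooling line, assuming the answer the explanation names (kernel_size=2); the 32x32 input is again an assumed size:

```python
import torch
import torch.nn as nn

# 2x2 max pooling with stride 2 halves both spatial dimensions.
pool = nn.MaxPool2d(kernel_size=2, stride=2)

x = torch.randn(1, 16, 32, 32)  # e.g. the 16 feature maps from the convolution above
out = pool(x)
print(out.shape)                # torch.Size([1, 16, 16, 16])
```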
Fix the error in the forward method to apply ReLU activation after convolution.
def forward(self, x):
    x = self.conv1(x)
    x = nn.[1]()(x)
    x = self.pool(x)
    return x
ReLU is the standard activation function used after convolution layers to introduce non-linearity.
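A sketch of the corrected forward method with the blank filled as nn.ReLU, wrapped in a minimal module so it runs; the layer sizes are the assumed ones from the earlier exercises:

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, stride=1, padding=1)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)

    def forward(self, x):
        x = self.conv1(x)
        x = nn.ReLU()(x)   # the blank: ReLU adds non-linearity after the convolution
        x = self.pool(x)
        return x

out = Net()(torch.randn(1, 3, 32, 32))
print(out.shape)  # torch.Size([1, 16, 16, 16])
```

In idiomatic PyTorch the activation is usually applied with the functional form torch.nn.functional.relu(x), or the nn.ReLU module is created once in __init__ rather than on every forward pass.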
Fill both blanks to flatten the tensor and pass it to a fully connected layer.
x = x.[1](x.size(0), -1)
x = self.[2](x)
Use view to flatten the tensor and then pass it to the fully connected layer named fc.
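A sketch with both blanks filled (view and fc, as the hint states); the fully connected layer's dimensions here are assumptions chosen to match the pooled feature maps above:

```python
import torch
import torch.nn as nn

fc = nn.Linear(16 * 16 * 16, 128)  # assumed sizes: 16 maps of 16x16 -> 128 features

x = torch.randn(4, 16, 16, 16)     # a batch of 4 pooled feature-map tensors
x = x.view(x.size(0), -1)          # blank [1]: view flattens all dims except the batch
x = fc(x)                          # blank [2]: the fully connected layer named fc
print(x.shape)                     # torch.Size([4, 128])
```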
Fill all three blanks to define the final output layer with correct input size and activation.
self.fc = nn.Linear([1], [2])
output = self.fc(x)
output = nn.[3]()(output)
The final linear layer maps 128 features to 10 classes, and LogSoftmax (with dim=1, so normalization runs over the class dimension) converts the logits to log-probabilities for classification.
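A sketch with all three blanks filled as the explanation describes (128 in-features, 10 out-features, LogSoftmax). Note that nn.LogSoftmax needs an explicit dim argument; the batch size of 4 is an assumption:

```python
import torch
import torch.nn as nn

fc = nn.Linear(128, 10)                # blanks [1] and [2]: 128 features -> 10 classes

x = torch.randn(4, 128)                # a batch of 4 feature vectors
output = fc(x)
output = nn.LogSoftmax(dim=1)(output)  # blank [3]: log-probabilities over the classes
print(output.shape)                    # torch.Size([4, 10])
```

Exponentiating the output recovers probabilities that sum to 1 per sample; in practice LogSoftmax is typically paired with nn.NLLLoss (or replaced by nn.CrossEntropyLoss applied directly to the logits).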