Complete the code to create a 2D convolutional layer with a kernel size of 3.
conv = torch.nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3)
The kernel size defines the size of the filter that slides over the input image. Here, 3 means a 3x3 filter.
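As a quick sanity check, the completed layer can be run on a small input; the 5x5 input size below is an illustrative assumption, not part of the exercise:

```python
import torch

# Sketch: apply the 3x3 convolution to a single-channel 5x5 input.
conv = torch.nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3)
x = torch.randn(1, 1, 5, 5)  # (batch, channels, height, width)
y = conv(x)
print(tuple(y.shape))  # (1, 1, 3, 3): with no padding, the 3x3 kernel trims a 1-pixel border
```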
Complete the code to set the stride of the convolutional layer to 2.
conv = torch.nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, stride=2)
Stride controls how much the filter moves at each step. A stride of 2 means the filter moves 2 pixels at a time.
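The effect of the stride shows up in the output shape; here a 7x7 input is an assumed example size:

```python
import torch

# Sketch: a 3x3 kernel with stride 2 over a 7x7 input.
conv = torch.nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, stride=2)
x = torch.randn(1, 1, 7, 7)
y = conv(x)
print(tuple(y.shape))  # (1, 1, 3, 3): (7 - 3) // 2 + 1 = 3 positions per dimension
```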
Fix the error in the code by choosing the correct padding value to keep the output size the same as the input size with kernel size 3 and stride 1.
conv = torch.nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, stride=1, padding=1)
Padding of 1 adds a border of zeros around the input, keeping the output size the same as the input size when the kernel size is 3 and the stride is 1.
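The rule behind this choice can be sketched with the standard convolution output-size formula, floor((n + 2p - k) / s) + 1. The helper below is a plain-Python illustration, not part of the exercise:

```python
def conv_output_size(n, kernel_size, stride=1, padding=0):
    """Output length along one spatial dimension of a convolution."""
    return (n + 2 * padding - kernel_size) // stride + 1

# Kernel 3, stride 1, padding 1 leaves the size unchanged:
print(conv_output_size(8, kernel_size=3, stride=1, padding=1))  # 8
# Without padding, the output shrinks by kernel_size - 1:
print(conv_output_size(8, kernel_size=3, stride=1, padding=0))  # 6
```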
Fill both blanks to create a convolutional layer with kernel size 5 and stride 2.
conv = torch.nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5, stride=2)
Kernel size 5 means a 5x5 filter. Stride 2 means the filter moves 2 pixels at a time.
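Feeding the layer a 3-channel image confirms both effects at once; the 32x32 input size is an assumed example:

```python
import torch

# Sketch: 3 input channels, 6 output filters, 5x5 kernel, stride 2.
conv = torch.nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5, stride=2)
x = torch.randn(1, 3, 32, 32)  # e.g. a batch of one RGB image
y = conv(x)
print(tuple(y.shape))  # (1, 6, 14, 14): (32 - 5) // 2 + 1 = 14, one map per filter
```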
Fill all three blanks to create a convolutional layer with kernel size 7, stride 1, and padding 3.
conv = torch.nn.Conv2d(in_channels=3, out_channels=6, kernel_size=7, stride=1, padding=3)
in_channels=3 matches RGB input, out_channels=6 creates 6 filters, kernel_size=7 gives a 7x7 filter, and stride=1 with padding=3 keeps the output size equal to the input size.
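A shape check verifies the "same size" claim for the 7x7 kernel; the 28x28 input below is an assumed example size:

```python
import torch

# Sketch: kernel 7, stride 1, padding 3 preserves spatial size
# because 2 * padding = kernel_size - 1.
conv = torch.nn.Conv2d(in_channels=3, out_channels=6, kernel_size=7, stride=1, padding=3)
x = torch.randn(1, 3, 28, 28)
y = conv(x)
print(tuple(y.shape))  # (1, 6, 28, 28): height and width are unchanged
```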