PyTorch · ~5 mins

nn.Conv2d layers in PyTorch

Introduction
A Conv2d layer helps a computer see patterns in images by sliding small filters over the picture to find edges, shapes, or colors.
When you want a computer to recognize objects in photos.
When building apps that detect faces or handwriting.
When analyzing medical images like X-rays.
When creating filters for image effects.
When you want to reduce image size but keep important details.
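The idea above can be seen in a few lines: a single Conv2d layer turns an image tensor into a stack of feature maps, one per filter. This is a minimal sketch using a random tensor in place of a real photo; the layer sizes are just illustrative.

```python
import torch
import torch.nn as nn

# One Conv2d layer: 3 input channels (RGB), 8 filters, 3x3 kernel.
# padding=1 keeps the spatial size unchanged for a 3x3 kernel.
conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1)

# A fake RGB image batch: batch size 1, 3 channels, 32x32 pixels
image = torch.randn(1, 3, 32, 32)

# Each of the 8 filters slides over the image, producing 8 feature maps
features = conv(image)
print(features.shape)  # torch.Size([1, 8, 32, 32])
```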
Syntax
PyTorch
nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')
in_channels is the number of input channels (e.g., 3 for RGB images, 1 for grayscale).
out_channels is the number of filters to apply; it becomes the number of output channels.
kernel_size is the filter size, given as one number (square filter) or a (height, width) tuple.
stride is how many pixels the filter moves each step (default 1).
padding is how many pixels to add around the edges before convolving (default 0).
bias controls whether a learnable bias is added to each output channel (default True).
Examples
Creates a Conv2d layer that takes a color image (3 channels) and applies 16 filters of size 3x3.
PyTorch
conv = nn.Conv2d(3, 16, 3)
Creates a Conv2d layer for grayscale images (1 channel) with 32 filters of size 5x5, moving 2 pixels at a time (stride=2) and adding 2 pixels of padding.
PyTorch
conv = nn.Conv2d(1, 32, 5, stride=2, padding=2)
Creates a Conv2d layer with 10 input channels and 20 filters of size 3x5, without adding bias.
PyTorch
conv = nn.Conv2d(10, 20, (3, 5), bias=False)
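You can inspect a layer's learnable parameters to confirm how these arguments shape it: the weight tensor is (out_channels, in_channels, kernel_height, kernel_width), and with bias=False there is no bias tensor at all. A quick check on the last example:

```python
import torch.nn as nn

# Same layer as above: 10 input channels, 20 filters of size 3x5, no bias
conv = nn.Conv2d(10, 20, (3, 5), bias=False)

print(conv.weight.shape)  # torch.Size([20, 10, 3, 5])
print(conv.bias)          # None, because bias=False
```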
Sample Model
This code creates a Conv2d layer that takes 1-channel images and outputs 2 channels using 3x3 filters. It applies this layer to a simple 5x5 image with values from 0 to 24. It prints the output shape and values, plus the shapes of the weights and bias.
PyTorch
import torch
import torch.nn as nn

# Create a Conv2d layer
conv = nn.Conv2d(in_channels=1, out_channels=2, kernel_size=3, stride=1, padding=1)

# Create a dummy grayscale image batch: batch size 1, 1 channel, 5x5 pixels
input_tensor = torch.arange(25, dtype=torch.float32).reshape(1, 1, 5, 5)

# Apply the Conv2d layer
output = conv(input_tensor)

# Print output shape and values
print('Output shape:', output.shape)
print('Output tensor:', output)

# Print layer weights shape
print('Weights shape:', conv.weight.shape)
print('Bias shape:', conv.bias.shape)
Important Notes
Padding adds extra pixels (zeros by default) around the image edges so the output can keep the input's size.
Stride controls how far the filter moves each step; a bigger stride means a smaller output.
Kernel size is the filter size; common choices are 3x3 or 5x5.
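These notes combine into the standard output-size formula from the PyTorch docs: out = floor((in + 2*padding - dilation*(kernel - 1) - 1) / stride) + 1. A small sketch (the helper function name is just for illustration) computes it by hand and checks it against a real layer:

```python
import torch
import torch.nn as nn

def conv_output_size(size, kernel, stride=1, padding=0, dilation=1):
    # Output-size formula for one spatial dimension of nn.Conv2d
    return (size + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1

# 28x28 input, 5x5 kernel, stride 2, padding 2 -> 14x14 output
print(conv_output_size(28, 5, stride=2, padding=2))  # 14

# Verify against an actual Conv2d layer
conv = nn.Conv2d(1, 8, 5, stride=2, padding=2)
out = conv(torch.randn(1, 1, 28, 28))
print(out.shape)  # torch.Size([1, 8, 14, 14])
```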
Summary
nn.Conv2d slides filters over images to find patterns.
You set input channels, output channels, and filter size when creating it.
It helps computers understand images by focusing on small parts at a time.