PyTorch · How-To · Beginner · 3 min read

How to Resize Images in PyTorch: Simple Guide with Examples

To resize images in PyTorch, use torchvision.transforms.Resize which changes the image size to the specified dimensions. Apply it as part of a transform pipeline on PIL images or tensors before feeding them to your model.

Syntax

The main function to resize images in PyTorch is torchvision.transforms.Resize. You specify the new size as either an integer or a tuple of (height, width). This transform can be applied to PIL images or tensors.

  • Resize(size): size can be an int or a tuple. If an int, the smaller edge is matched to size and the aspect ratio is preserved.
  • Use inside transforms.Compose to chain multiple transforms.
python
from torchvision import transforms

resize_transform = transforms.Resize((128, 128))  # Resize to 128x128 pixels

Example

This example loads an image, resizes it to 128x128 pixels, and converts it to a tensor for model input.

python
from PIL import Image
from torchvision import transforms

# Load an image from file
image = Image.open('sample.jpg')

# Define transform pipeline: resize and convert to tensor
transform = transforms.Compose([
    transforms.Resize((128, 128)),
    transforms.ToTensor()
])

# Apply transform
resized_tensor = transform(image)

# Print shape: should be [channels, height, width]
print(resized_tensor.shape)
Output
torch.Size([3, 128, 128])

Common Pitfalls

Common mistakes when resizing images in PyTorch include:

  • Passing a single int to Resize expecting a fixed square size; it resizes the smaller edge only and keeps the aspect ratio.
  • Passing a NumPy array directly; Resize accepts PIL images or tensors of shape [..., H, W] (torchvision 0.8+), so convert arrays first.
  • Forgetting to chain Resize with ToTensor, so the model receives a PIL image instead of a tensor and raises an error.
python
from torchvision import transforms

# Wrong: single int resizes smaller edge, not both dimensions fixed
resize_wrong = transforms.Resize(128)  # Resizes smaller edge to 128, aspect ratio kept

# Right: use tuple for fixed size
resize_right = transforms.Resize((128, 128))

Quick Reference

Function | Description | Example Usage
transforms.Resize | Resize image to given size | transforms.Resize((128, 128))
transforms.ToTensor | Convert PIL image to tensor | transforms.ToTensor()
transforms.Compose | Chain multiple transforms | transforms.Compose([transforms.Resize((128, 128)), transforms.ToTensor()])

Key Takeaways

  • Use torchvision.transforms.Resize with a tuple to set an exact image size.
  • Resize works on both PIL images and tensors of shape [..., H, W] (torchvision 0.8+).
  • Chain Resize with ToTensor to prepare images for models.
  • Passing a single int to Resize keeps the aspect ratio by resizing the smaller edge only.
  • Always check the output tensor shape to confirm resizing worked as expected.