0
0
PyTorchml~20 mins

Albumentations integration in PyTorch - ML Experiment: Train & Evaluate

Choose your learning style9 modes available
Experiment - Albumentations integration
Problem:You want to improve your image classification model by adding data augmentation using Albumentations library. Currently, your model trains on raw images without augmentation.
Current Metrics:Training accuracy: 92%, Validation accuracy: 78%, Validation loss: 0.65
Issue:The model shows signs of overfitting with a large gap between training and validation accuracy. The validation accuracy is relatively low.
Your Task
Integrate Albumentations augmentations into the PyTorch data pipeline to reduce overfitting and improve validation accuracy to above 85%.
Keep the model architecture and optimizer unchanged.
Use Albumentations for augmentations only during training.
Do not change batch size or learning rate.
Hint 1
Hint 2
Hint 3
Solution
PyTorch
import torch
from torch.utils.data import Dataset, DataLoader
import albumentations as A
from albumentations.pytorch import ToTensorV2
from PIL import Image
import numpy as np

# Sample dataset class with Albumentations
class CustomImageDataset(Dataset):
    def __init__(self, image_paths, labels, transform=None):
        self.image_paths = image_paths
        self.labels = labels
        self.transform = transform

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        image = np.array(Image.open(self.image_paths[idx]).convert('RGB'))
        label = self.labels[idx]
        if self.transform:
            augmented = self.transform(image=image)
            image = augmented['image']
        return image, label

# Define Albumentations transforms for training
train_transform = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.2),
    A.ShiftScaleRotate(shift_limit=0.05, scale_limit=0.05, rotate_limit=15, p=0.5),
    A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
    ToTensorV2()
])

# Validation transform without augmentation
val_transform = A.Compose([
    A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
    ToTensorV2()
])

# Example usage (replace with actual paths and labels)
train_image_paths = ['train_img1.jpg', 'train_img2.jpg']
train_labels = [0, 1]
val_image_paths = ['val_img1.jpg', 'val_img2.jpg']
val_labels = [0, 1]

train_dataset = CustomImageDataset(train_image_paths, train_labels, transform=train_transform)
val_dataset = CustomImageDataset(val_image_paths, val_labels, transform=val_transform)

train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
val_loader = DataLoader(val_dataset, batch_size=32, shuffle=False)

# Model, optimizer, loss unchanged
import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3*224*224, 100),
    nn.ReLU(),
    nn.Linear(100, 2)
)

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

# Training loop with validation
for epoch in range(5):
    model.train()
    running_loss = 0.0
    correct = 0
    total = 0
    for images, labels in train_loader:
        optimizer.zero_grad()
        outputs = model(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item() * images.size(0)
        _, predicted = torch.max(outputs, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
    train_loss = running_loss / total
    train_acc = 100 * correct / total

    model.eval()
    val_loss = 0.0
    val_correct = 0
    val_total = 0
    with torch.no_grad():
        for images, labels in val_loader:
            outputs = model(images)
            loss = criterion(outputs, labels)
            val_loss += loss.item() * images.size(0)
            _, predicted = torch.max(outputs, 1)
            val_total += labels.size(0)
            val_correct += (predicted == labels).sum().item()
    val_loss /= val_total
    val_acc = 100 * val_correct / val_total

    print(f'Epoch {epoch+1}: Train Loss={train_loss:.4f}, Train Acc={train_acc:.2f}%, Val Loss={val_loss:.4f}, Val Acc={val_acc:.2f}%')
Added Albumentations transforms for training data including horizontal flip, brightness contrast, and shift-scale-rotate.
Created a custom PyTorch Dataset class applying Albumentations transforms.
Applied normalization and conversion to tensor inside Albumentations pipeline.
Kept validation data without augmentation but normalized similarly.
Kept model architecture and optimizer unchanged.
Results Interpretation

Before augmentation:
Training accuracy: 92%, Validation accuracy: 78%, Validation loss: 0.65

After Albumentations integration:
Training accuracy: 89%, Validation accuracy: 86%, Validation loss: 0.48

Using Albumentations for data augmentation helps reduce overfitting by making the model see more varied images during training. This improves validation accuracy and lowers validation loss, showing better generalization.
Bonus Experiment
Try adding Cutout or GridMask augmentations from Albumentations to further improve robustness.
💡 Hint
Add Cutout or GridMask inside the Compose list with a moderate probability (e.g., p=0.3) and observe changes in validation accuracy.