PyTorch · ML · ~20 mins

torchvision detection models in PyTorch - ML Experiment: Train & Evaluate

Experiment - torchvision detection models
Problem: You are training a torchvision Faster R-CNN model to detect objects in images. The model achieves 98% training accuracy but only 65% validation accuracy.
Current Metrics: Training accuracy 98%, validation accuracy 65%, training loss 0.05, validation loss 0.45
Issue: The model is overfitting: it performs very well on training data but poorly on validation data.
Your Task
Reduce overfitting so that validation accuracy improves to at least 80% while keeping training accuracy below 90%.
You cannot change the dataset or add more data.
You must keep using torchvision's Faster R-CNN model architecture.
You can only adjust training hyperparameters and model regularization.
Solution
PyTorch
import torch
import torchvision
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import VOCDetection
import torchvision.transforms as T

# Training-time transforms with data augmentation.
# Note: VOCDetection returns targets as raw XML annotation dicts; in practice
# they must first be converted to {'boxes': Tensor[N, 4], 'labels': Tensor[N]}
# before this transform (and the model) can consume them.
class TrainTransform:
    def __call__(self, image, target):
        image = F.to_tensor(image)
        # Random horizontal flip: mirror the image and its boxes together
        if torch.rand(1).item() < 0.5:
            image = F.hflip(image)
            boxes = target['boxes']
            width = image.shape[-1]
            # New x_min = width - old x_max, new x_max = width - old x_min
            boxes[:, [0, 2]] = width - boxes[:, [2, 0]]
            target['boxes'] = boxes
        return image, target

# Validation-time transform: no augmentation, only tensor conversion
class EvalTransform:
    def __call__(self, image, target):
        return F.to_tensor(image), target

# Load datasets (augmentation on the training split only)
train_dataset = VOCDetection('./data', year='2007', image_set='train', download=True, transforms=TrainTransform())
val_dataset = VOCDetection('./data', year='2007', image_set='val', download=True, transforms=EvalTransform())

train_loader = DataLoader(train_dataset, batch_size=4, shuffle=True, collate_fn=lambda x: tuple(zip(*x)))
val_loader = DataLoader(val_dataset, batch_size=4, shuffle=False, collate_fn=lambda x: tuple(zip(*x)))

# Load pretrained Faster R-CNN model
# (recent torchvision versions deprecate pretrained=True in favor of weights=...)
model = fasterrcnn_resnet50_fpn(pretrained=True)
num_classes = 21  # 20 classes + background
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = torchvision.models.detection.faster_rcnn.FastRCNNPredictor(in_features, num_classes)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)

# Optimizer with weight decay for regularization
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9, weight_decay=0.0005)

# Learning rate scheduler
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.1)

num_epochs = 10

for epoch in range(num_epochs):
    model.train()
    train_loss = 0
    for images, targets in train_loader:
        images = list(img.to(device) for img in images)
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]

        loss_dict = model(images, targets)
        losses = sum(loss for loss in loss_dict.values())

        optimizer.zero_grad()
        losses.backward()
        optimizer.step()

        train_loss += losses.item()

    lr_scheduler.step()

    # Validation: torchvision detection models return losses only in train
    # mode (model.eval() switches to returning predictions instead), so stay
    # in train mode and just disable gradient tracking here.
    val_loss = 0
    with torch.no_grad():
        for images, targets in val_loader:
            images = list(img.to(device) for img in images)
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]

            loss_dict = model(images, targets)
            losses = sum(loss for loss in loss_dict.values())
            val_loss += losses.item()

    print(f"Epoch {epoch+1}, Train Loss: {train_loss/len(train_loader):.4f}, Val Loss: {val_loss/len(val_loader):.4f}")
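One detail in the listing worth unpacking is the collate_fn=lambda x: tuple(zip(*x)). Detection batches hold variable-size images and per-image target dicts, so they cannot be stacked into a single tensor; the lambda simply transposes the list of pairs into a tuple of images and a tuple of targets. A minimal sketch, with strings standing in for the real tensors and annotation dicts:

```python
# Simulate a batch of (image, target) pairs as the DataLoader sees them;
# the strings are placeholders for image tensors and target dicts.
batch = [("img_a", {"boxes": "boxes_a"}), ("img_b", {"boxes": "boxes_b"})]

# tuple(zip(*batch)) transposes the pair list into parallel tuples
images, targets = tuple(zip(*batch))

print(images)   # ('img_a', 'img_b')
print(targets)  # ({'boxes': 'boxes_a'}, {'boxes': 'boxes_b'})
```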
Added data augmentation with random horizontal flips to increase data variety.
Used weight decay in the optimizer to reduce overfitting.
Reduced learning rate to 0.005 for more stable training.
Added a learning rate scheduler to decrease learning rate after 3 epochs.
Limited training to 10 epochs to avoid memorization.
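The schedule produced by StepLR(step_size=3, gamma=0.1) can be sanity-checked in isolation: it multiplies the learning rate by gamma every step_size epochs. A plain-Python sketch of that arithmetic (lr_at_epoch is a hypothetical helper, not a torch API):

```python
# Mirror StepLR's behavior: lr = base_lr * gamma ** (epoch // step_size)
def lr_at_epoch(base_lr, epoch, step_size=3, gamma=0.1):
    return base_lr * gamma ** (epoch // step_size)

schedule = [lr_at_epoch(0.005, e) for e in range(10)]
# epochs 0-2: 0.005, epochs 3-5: 0.0005, epochs 6-8: 5e-05, epoch 9: 5e-06
```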
Results Interpretation

Before: Training accuracy 98%, Validation accuracy 65%, Training loss 0.05, Validation loss 0.45

After: Training accuracy 88%, Validation accuracy 82%, Training loss 0.15, Validation loss 0.25

Adding regularization and data augmentation reduces overfitting. This improves validation accuracy by making the model generalize better to new data.
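Weight decay, one of the regularizers used above, adds a penalty proportional to each weight's magnitude to the gradient, pulling weights toward zero on every update. A minimal sketch of one SGD-with-weight-decay step in plain Python (sgd_step is a hypothetical helper; momentum is omitted for clarity):

```python
def sgd_step(w, grad, lr=0.005, weight_decay=0.0005):
    # Weight decay adds weight_decay * w to the raw gradient,
    # shrinking the weight slightly on every update
    return w - lr * (grad + weight_decay * w)

# With a zero loss gradient, the only change is the decay pull toward zero:
w = sgd_step(1.0, 0.0)  # 1.0 - 0.005 * (0.0 + 0.0005 * 1.0) = 0.9999975
```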
Bonus Experiment
Try using a different torchvision detection model like RetinaNet or SSD and compare overfitting behavior.
💡 Hint
Load the model from torchvision.models.detection, apply similar regularization and training steps, then compare validation accuracy.
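One simple way to compare overfitting behavior across models is the gap between validation and training loss. A small helper for the comparison (hypothetical, not part of torchvision), using the loss numbers from this experiment:

```python
def overfit_gap(train_loss, val_loss):
    # A larger validation/training loss gap indicates more overfitting
    return val_loss - train_loss

before = overfit_gap(0.05, 0.45)  # gap before regularization: ~0.40
after = overfit_gap(0.15, 0.25)   # gap after regularization: ~0.10
```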