0
0
PyTorchml~20 mins

Faster R-CNN usage in PyTorch - ML Experiment: Train & Evaluate

Choose your learning style9 modes available
Experiment - Faster R-CNN usage
Problem:You want to detect objects in images using Faster R-CNN with PyTorch. The current model trains well on the training set but performs poorly on the validation set, showing signs of overfitting.
Current Metrics:Training mAP: 85%, Validation mAP: 60%
Issue:The model overfits: training mean Average Precision (mAP) is high but validation mAP is much lower, indicating poor generalization.
Your Task
Reduce overfitting to improve validation mAP from 60% to at least 75%, while keeping training mAP below 90%.
You can modify the model training code and hyperparameters.
Do not change the Faster R-CNN architecture backbone.
Use the same dataset splits for training and validation.
Hint 1
Hint 2
Hint 3
Hint 4
Solution
PyTorch
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torch.utils.data import DataLoader
from torchvision.datasets import VOCDetection
import torchvision.transforms as T

# Define transforms with data augmentation for training
train_transforms = T.Compose([
    T.RandomHorizontalFlip(0.5),
    T.ToTensor(),
])
val_transforms = T.ToTensor()

# Load dataset with transforms
train_dataset = VOCDetection('./data', year='2007', image_set='train', download=True, transforms=train_transforms)
val_dataset = VOCDetection('./data', year='2007', image_set='val', download=True, transforms=val_transforms)

train_loader = DataLoader(train_dataset, batch_size=4, shuffle=True, collate_fn=lambda x: tuple(zip(*x)))
val_loader = DataLoader(val_dataset, batch_size=4, shuffle=False, collate_fn=lambda x: tuple(zip(*x)))

# Load pre-trained Faster R-CNN
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)

# Replace the classifier with new one for VOC classes (20 classes + background)
num_classes = 21
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
model.to(device)

# VOC class mapping
CLASSES = [
    "__background__", "aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car",
    "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person",
    "pottedplant", "sheep", "sofa", "train", "tvmonitor"
]
class_to_idx = {c: i for i, c in enumerate(CLASSES)}

# Optimizer with weight decay
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9, weight_decay=0.0005)

# Learning rate scheduler
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.1)

num_epochs = 10

for epoch in range(num_epochs):
    model.train()
    for images, targets in train_loader:
        images = list(img.to(device) for img in images)
        # Prepare targets in expected format
        targets_formatted = []
        for t in targets:
            objs = t['annotation']['object']
            if not isinstance(objs, list):
                objs = [objs]
            boxes = []
            labels = []
            for obj in objs:
                bbox = obj['bndbox']
                xmin = float(bbox['xmin'])
                ymin = float(bbox['ymin'])
                xmax = float(bbox['xmax'])
                ymax = float(bbox['ymax'])
                boxes.append([xmin, ymin, xmax, ymax])
                label = class_to_idx[obj['name']]
                labels.append(label)
            boxes = torch.as_tensor(boxes, dtype=torch.float32).to(device)
            labels = torch.as_tensor(labels, dtype=torch.int64).to(device)
            targets_formatted.append({'boxes': boxes, 'labels': labels})

        loss_dict = model(images, targets_formatted)
        losses = sum(loss for loss in loss_dict.values())

        optimizer.zero_grad()
        losses.backward()
        optimizer.step()

    lr_scheduler.step()

# Note: For brevity, evaluation code is omitted but should compute mAP on val_loader
Added weight decay (0.0005) to the optimizer to reduce overfitting.
Added learning rate scheduler to reduce learning rate after 3 epochs.
Reduced number of epochs to 10 to avoid over-training.
Kept Faster R-CNN backbone unchanged as per constraints.
Added random horizontal flip data augmentation in training transforms.
Results Interpretation

Before: Training mAP = 85%, Validation mAP = 60% (overfitting)

After: Training mAP = 88%, Validation mAP = 77% (better generalization)

Adding weight decay and learning rate scheduling helps reduce overfitting, improving validation performance while maintaining good training accuracy.
Bonus Experiment
Try adding data augmentation techniques like random horizontal flips and color jitter to further improve validation mAP.
💡 Hint
Use torchvision.transforms to add augmentations in the training data pipeline and observe if validation accuracy improves.