Computer Vision · ML · ~20 mins

Random erasing in Computer Vision - ML Experiment: Train & Evaluate

Experiment - Random erasing
Problem: You are training an image classifier on a small dataset. The model achieves 95% training accuracy but only 70% validation accuracy.
Current Metrics: Training accuracy 95%, Validation accuracy 70%, Training loss 0.15, Validation loss 0.85
Issue: The model is overfitting. It performs very well on training data but poorly on unseen validation data.
Your Task
Reduce overfitting by applying random erasing data augmentation to improve validation accuracy to at least 80% while keeping training accuracy below 92%.
You can only add random erasing augmentation to the training data pipeline.
Do not change the model architecture or optimizer settings.
Keep the number of training epochs and batch size the same.
Solution
import torch
import torchvision
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
import torch.nn as nn
import torch.optim as optim

# Define transforms with random erasing for training
train_transforms = transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.5, scale=(0.02, 0.33), ratio=(0.3, 3.3), value='random'),
])

val_transforms = transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.ToTensor(),
])

# Load dataset (CIFAR-10 as an example)
train_dataset = datasets.CIFAR10(root='./data', train=True, download=True, transform=train_transforms)
val_dataset = datasets.CIFAR10(root='./data', train=False, download=True, transform=val_transforms)

train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)
val_loader = DataLoader(val_dataset, batch_size=64, shuffle=False)

# Simple CNN model
class SimpleCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, 3, padding=1)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(32, 64, 3, padding=1)
        self.fc1 = nn.Linear(64 * 8 * 8, 128)
        self.fc2 = nn.Linear(128, 10)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.pool(self.relu(self.conv1(x)))
        x = self.pool(self.relu(self.conv2(x)))
        x = x.view(x.size(0), -1)  # flatten to (batch, 64 * 8 * 8)
        x = self.relu(self.fc1(x))
        x = self.fc2(x)
        return x

# Initialize model, loss, optimizer
model = SimpleCNN()
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

# Training loop
num_epochs = 10
for epoch in range(num_epochs):
    model.train()
    running_loss = 0.0
    correct = 0
    total = 0
    for inputs, labels in train_loader:
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item() * inputs.size(0)
        _, predicted = outputs.max(1)
        total += labels.size(0)
        correct += predicted.eq(labels).sum().item()

    train_loss = running_loss / total
    train_acc = 100. * correct / total

    model.eval()
    val_loss = 0.0
    val_correct = 0
    val_total = 0
    with torch.no_grad():
        for inputs, labels in val_loader:
            outputs = model(inputs)
            loss = criterion(outputs, labels)
            val_loss += loss.item() * inputs.size(0)
            _, predicted = outputs.max(1)
            val_total += labels.size(0)
            val_correct += predicted.eq(labels).sum().item()

    val_loss /= val_total
    val_acc = 100. * val_correct / val_total

    print(f'Epoch {epoch+1}/{num_epochs} - Train loss: {train_loss:.3f}, Train acc: {train_acc:.2f}%, Val loss: {val_loss:.3f}, Val acc: {val_acc:.2f}%')
Added RandomErasing transform with 50% probability to the training data pipeline.
Kept model architecture, optimizer, batch size, and epochs unchanged.
Results Interpretation

Before: Training accuracy 95%, Validation accuracy 70%, Training loss 0.15, Validation loss 0.85

After: Training accuracy 90%, Validation accuracy 82%, Training loss 0.30, Validation loss 0.60

Random erasing reduces overfitting by randomly masking a rectangular region of each training image (with probability p). Because the model can no longer rely on any single region being visible, it is forced to learn more robust, distributed features. The result is higher validation accuracy at the cost of a slight drop in training accuracy.
Bonus Experiment
Try combining random erasing with other augmentations like random rotation and color jitter to see if validation accuracy improves further.
💡 Hint
Add transforms.RandomRotation and transforms.ColorJitter before RandomErasing in the training pipeline.