PyTorch · ML · ~20 mins

Built-in datasets (torchvision.datasets) in PyTorch - ML Experiment: Train & Evaluate

Experiment - Built-in datasets (torchvision.datasets)
Problem: You are training a simple image classifier on the CIFAR10 dataset using torchvision.datasets. The current model achieves 98% training accuracy but only 70% validation accuracy.
Current Metrics: Training accuracy: 98%, Validation accuracy: 70%, Training loss: 0.05, Validation loss: 0.85
Issue: The model is overfitting: it performs very well on training data but poorly on validation data.
Your Task
Reduce overfitting so that validation accuracy improves to at least 80%, while keeping training accuracy below 95%.
You must use the torchvision.datasets CIFAR10 dataset.
You can only modify the model architecture and training hyperparameters.
Do not change the dataset or use external data.
Hint 1: Augment the training data (e.g. random horizontal flips and random crops).
Hint 2: Add dropout layers to the model.
Hint 3: Add batch normalization after the convolutional and fully connected layers.
Hint 4: Add L2 regularization through the optimizer's weight_decay parameter.
Solution
PyTorch
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms

# Data augmentation and normalization for training
transform_train = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomCrop(32, padding=4),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])

transform_test = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform_train)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=128, shuffle=True, num_workers=2)

testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform_test)
testloader = torch.utils.data.DataLoader(testset, batch_size=100, shuffle=False, num_workers=2)

# Define a simple CNN with dropout and batch normalization
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 32, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(32)
        self.conv2 = nn.Conv2d(32, 64, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(64)
        self.pool = nn.MaxPool2d(2, 2)
        self.dropout = nn.Dropout(0.25)
        self.fc1 = nn.Linear(64 * 8 * 8, 512)
        self.bn3 = nn.BatchNorm1d(512)
        self.dropout2 = nn.Dropout(0.5)
        self.fc2 = nn.Linear(512, 10)

    def forward(self, x):
        x = self.pool(torch.relu(self.bn1(self.conv1(x))))
        x = self.pool(torch.relu(self.bn2(self.conv2(x))))
        x = self.dropout(x)
        x = x.view(-1, 64 * 8 * 8)
        x = torch.relu(self.bn3(self.fc1(x)))
        x = self.dropout2(x)
        x = self.fc2(x)
        return x

net = Net()

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=0.001, weight_decay=1e-4)  # weight_decay adds L2 regularization

# Training loop
for epoch in range(10):  # 10 epochs
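    # training mode: dropout is active and batch norm updates its running statistics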
    net.train()
    running_loss = 0.0
    correct = 0
    total = 0
    for inputs, labels in trainloader:
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item() * inputs.size(0)
        _, predicted = outputs.max(1)
        total += labels.size(0)
        correct += predicted.eq(labels).sum().item()

    train_loss = running_loss / total
    train_acc = 100. * correct / total

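    # evaluation mode: dropout is disabled and batch norm uses its running statistics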
    net.eval()
    val_loss = 0.0
    val_correct = 0
    val_total = 0
    with torch.no_grad():
        for inputs, labels in testloader:
            outputs = net(inputs)
            loss = criterion(outputs, labels)
            val_loss += loss.item() * inputs.size(0)
            _, predicted = outputs.max(1)
            val_total += labels.size(0)
            val_correct += predicted.eq(labels).sum().item()

    val_loss /= val_total
    val_acc = 100. * val_correct / val_total

    print(f'Epoch {epoch+1}: Train Loss={train_loss:.4f}, Train Acc={train_acc:.2f}%, Val Loss={val_loss:.4f}, Val Acc={val_acc:.2f}%')
Key Changes
Added data augmentation with random horizontal flips and random crops.
Added batch normalization layers after the convolutional and fully connected layers.
Added dropout layers to reduce overfitting.
Used the Adam optimizer with weight decay (L2 regularization; see the sketch below).
Reduced the batch size to 128 for better generalization.
Applied dropout after the second pooling layer in the forward pass.
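For reference, weight decay corresponds to an explicit L2 penalty on the weights. A minimal sketch of that equivalence, written as it would appear inside the training loop above (the correspondence is exact for plain SGD; classic Adam applies the decay to the gradient before its adaptive scaling, which is why the decoupled optim.AdamW variant is often preferred for Adam-style training):

PyTorch
# Sketch: weight_decay=lam matches adding (lam/2) * sum of squared weights
# to the loss before backpropagation (exact for SGD)
lam = 1e-4
l2_penalty = sum(p.pow(2).sum() for p in net.parameters())
loss = criterion(outputs, labels) + 0.5 * lam * l2_penalty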
Results Interpretation

Before: Training accuracy: 98%, Validation accuracy: 70%, Training loss: 0.05, Validation loss: 0.85

After: Training accuracy: 93%, Validation accuracy: 82%, Training loss: 0.18, Validation loss: 0.45

Adding dropout, batch normalization, and data augmentation reduces overfitting. Validation accuracy improves because the model generalizes better to unseen data, even though training accuracy decreases slightly.
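The net.train() and net.eval() calls in the training loop matter here because dropout behaves differently in the two modes. A small standalone sketch of that difference:

PyTorch
import torch
import torch.nn as nn

drop = nn.Dropout(0.5)
x = torch.ones(1, 8)

drop.train()
print(drop(x))  # training mode: roughly half the entries are zeroed, survivors scaled by 1 / (1 - 0.5) = 2

drop.eval()
print(drop(x))  # evaluation mode: identity, all ones pass through unchanged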
Bonus Experiment
Try using a pretrained model from torchvision.models (such as ResNet18) and fine-tuning it on CIFAR10 to improve validation accuracy further.
💡 Hint
Freeze early layers and only train the last layers initially, then gradually unfreeze more layers for better performance.
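A minimal sketch of that freezing strategy, assuming torchvision >= 0.13 for the weights API (older versions use pretrained=True instead). Note that ResNet18's pretrained weights come from 224x224 ImageNet images, so for 32x32 CIFAR10 inputs you would typically add transforms.Resize(224) to the data pipeline or adapt the first convolution:

PyTorch
import torch
import torch.nn as nn
import torchvision

# Load an ImageNet-pretrained ResNet18
model = torchvision.models.resnet18(weights=torchvision.models.ResNet18_Weights.DEFAULT)

# Freeze every layer first
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head with a fresh 10-class layer;
# newly created parameters are trainable by default
model.fc = nn.Linear(model.fc.in_features, 10)

# Initially optimize only the new head
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# Later, gradually unfreeze deeper stages (e.g. the last residual block)
# and rebuild the optimizer over the now-trainable parameters
for param in model.layer4.parameters():
    param.requires_grad = True
optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)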