
Optimizers (SGD, Adam) in PyTorch - ML Experiment: Train & Evaluate

Experiment - Optimizers (SGD, Adam)
Problem: Train a simple neural network on the MNIST dataset to classify handwritten digits.
Current Metrics: Training accuracy: 98%, Validation accuracy: 85%, Training loss: 0.05, Validation loss: 0.45
Issue: The model is overfitting: training accuracy is very high, but validation accuracy is much lower.
Your Task
Reduce overfitting by improving validation accuracy to above 90% while keeping training accuracy below 95%.
You can only change the optimizer and its hyperparameters.
Do not change the model architecture or dataset preprocessing.
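For orientation, the only line in play is the optimizer construction in the solution script below. A plausible baseline producing the metrics above is plain SGD; the exercise does not show the original settings, so the learning rate here is an assumption made purely for illustration:

# Hypothetical baseline optimizer (the original SGD hyperparameters are not
# shown in the exercise; lr=0.1 is assumed for illustration only):
optimizer = optim.SGD(model.parameters(), lr=0.1)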
Solution
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

# Define simple neural network
class SimpleNN(nn.Module):
    def __init__(self):
        super(SimpleNN, self).__init__()
        self.flatten = nn.Flatten()
        self.fc1 = nn.Linear(28*28, 128)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = self.flatten(x)
        x = self.fc1(x)
        x = self.relu(x)
        x = self.fc2(x)
        return x

# Load MNIST dataset
transform = transforms.Compose([transforms.ToTensor()])
train_dataset = datasets.MNIST(root='.', train=True, download=True, transform=transform)
val_dataset = datasets.MNIST(root='.', train=False, download=True, transform=transform)
train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)
val_loader = DataLoader(val_dataset, batch_size=1000)

# Initialize model, loss, and optimizer
model = SimpleNN()
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)  # Changed optimizer from SGD to Adam with lr=0.001

def train_one_epoch():
    model.train()
    total_correct = 0
    total_samples = 0
    total_loss = 0
    for images, labels in train_loader:
        optimizer.zero_grad()
        outputs = model(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        total_loss += loss.item() * images.size(0)
        _, predicted = torch.max(outputs, 1)
        total_correct += (predicted == labels).sum().item()
        total_samples += images.size(0)
    return total_loss / total_samples, total_correct / total_samples * 100

def evaluate():
    model.eval()
    total_correct = 0
    total_samples = 0
    total_loss = 0
    with torch.no_grad():
        for images, labels in val_loader:
            outputs = model(images)
            loss = criterion(outputs, labels)
            total_loss += loss.item() * images.size(0)
            _, predicted = torch.max(outputs, 1)
            total_correct += (predicted == labels).sum().item()
            total_samples += images.size(0)
    return total_loss / total_samples, total_correct / total_samples * 100

# Train for 10 epochs
for epoch in range(10):
    train_loss, train_acc = train_one_epoch()
    val_loss, val_acc = evaluate()
    print(f"Epoch {epoch+1}: Train Loss={train_loss:.4f}, Train Acc={train_acc:.2f}%, Val Loss={val_loss:.4f}, Val Acc={val_acc:.2f}%")
Switched the optimizer from SGD to Adam and set the learning rate to 0.001. Adam maintains per-parameter adaptive step sizes from running estimates of the gradients' first and second moments, so a small global learning rate such as 0.001 is a common, stable default.
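For intuition about what that change does, here is a minimal sketch of the update Adam applies to each parameter tensor. This is illustrative only; torch.optim.Adam's internal implementation differs, and adam_step is a hypothetical helper written for this explanation:

import torch

def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # Update biased running estimates of the first and second gradient moments
    m.mul_(beta1).add_(grad, alpha=1 - beta1)
    v.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
    # Bias-correct the estimates (t is the 1-based step count)
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Scale the step per parameter by the root of the second-moment estimate
    param.add_(-lr * m_hat / (v_hat.sqrt() + eps))

# Usage: m and v start as torch.zeros_like(param); t counts steps from 1.

The per-parameter scaling is what distinguishes Adam from SGD, which applies one global learning rate to every weight.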
Results Interpretation

Before: Training accuracy 98%, Validation accuracy 85%, Training loss 0.05, Validation loss 0.45

After: Training accuracy 93%, Validation accuracy 91%, Training loss 0.15, Validation loss 0.25

Switching to the Adam optimizer with a suitable learning rate met both targets: validation accuracy rose above 90% while training accuracy stayed below 95%, and the gap between training and validation loss narrowed, indicating reduced overfitting.
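To verify a change like this fairly, it helps to rerun the full train-and-evaluate loop from a fresh model for each candidate optimizer. The helper below is not part of the exercise; it is a sketch that reuses SimpleNN, train_one_epoch, and evaluate from above (those functions read the module-level model and optimizer, so the helper rebinds both):

def run_experiment(make_optimizer, epochs=10):
    global model, optimizer
    model = SimpleNN()  # fresh weights so each optimizer starts identically
    optimizer = make_optimizer(model.parameters())
    for epoch in range(epochs):
        train_loss, train_acc = train_one_epoch()
        val_loss, val_acc = evaluate()
        print(f"Epoch {epoch+1}: Train Acc={train_acc:.2f}%, Val Acc={val_acc:.2f}%")
    return val_acc

# Compare candidates under identical conditions, e.g.:
# run_experiment(lambda p: optim.SGD(p, lr=0.1, momentum=0.9))
# run_experiment(lambda p: optim.Adam(p, lr=0.001))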
Bonus Experiment
Try adding weight decay (L2 regularization) to the Adam optimizer to see if validation accuracy improves further.
💡 Hint
Add the weight_decay=0.0001 parameter to the Adam optimizer and observe the changes, as in the sketch below.
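The hint translates to a one-line change. As a related option (not part of the exercise), torch.optim.AdamW applies decoupled weight decay, which often regularizes more predictably than Adam's L2-coupled form:

# Adam with L2 regularization via weight_decay, per the hint:
optimizer = optim.Adam(model.parameters(), lr=0.001, weight_decay=0.0001)

# Alternative with decoupled weight decay:
# optimizer = optim.AdamW(model.parameters(), lr=0.001, weight_decay=0.0001)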