PyTorch · ML · ~20 mins

Autoencoder architecture in PyTorch - ML Experiment: Train & Evaluate

Experiment - Autoencoder architecture
Problem: You want to build an autoencoder to compress and reconstruct images from the MNIST dataset.
Current Metrics: Training loss: 0.02, Validation loss: 0.10
Issue: The model overfits: training loss is very low but validation loss is much higher, so the model does not generalize well.
Your Task
Reduce overfitting by improving validation loss to below 0.05 while keeping training loss below 0.04.
You can only modify the autoencoder architecture and training hyperparameters.
Do not change the dataset or preprocessing steps.
Solution
PyTorch
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

# Define improved autoencoder with dropout and batchnorm
class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(28*28, 128),
            nn.BatchNorm1d(128),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(128, 64),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(64, 32),
            nn.ReLU()
        )
        self.decoder = nn.Sequential(
            nn.Linear(32, 64),
            nn.ReLU(),
            nn.Linear(64, 128),
            nn.ReLU(),
            nn.Linear(128, 28*28),
            nn.Sigmoid()
        )

    def forward(self, x):
        x = self.encoder(x)
        x = self.decoder(x)
        return x

# Load MNIST dataset
transform = transforms.ToTensor()
train_dataset = datasets.MNIST(root='./data', train=True, download=True, transform=transform)
val_dataset = datasets.MNIST(root='./data', train=False, download=True, transform=transform)

train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)
val_loader = DataLoader(val_dataset, batch_size=64, shuffle=False)

# Initialize model, loss, optimizer
model = Autoencoder()
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

# Training loop
for epoch in range(20):
    model.train()
    train_loss = 0
    for data, _ in train_loader:
        inputs = data.view(data.size(0), -1)
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, inputs)
        loss.backward()
        optimizer.step()
        train_loss += loss.item() * inputs.size(0)
    train_loss /= len(train_loader.dataset)

    model.eval()
    val_loss = 0
    with torch.no_grad():
        for data, _ in val_loader:
            inputs = data.view(data.size(0), -1)
            outputs = model(inputs)
            loss = criterion(outputs, inputs)
            val_loss += loss.item() * inputs.size(0)
    val_loss /= len(val_loader.dataset)

    print(f"Epoch {epoch+1}: Train Loss: {train_loss:.4f}, Val Loss: {val_loss:.4f}")
Added dropout layers after the activations in the encoder to reduce overfitting.
Added batch normalization layers to stabilize and speed up training.
Reduced hidden layer sizes to 128, 64, and 32 to limit model capacity.
Lowered the learning rate to 0.001 for smoother convergence.
Trained for 20 epochs to give the regularized model time to converge.
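One detail in the training loop worth noting: accumulating `loss.item() * inputs.size(0)` and dividing by `len(loader.dataset)` gives the true per-sample mean loss even when the last batch is smaller than the rest. A small numeric check of that bookkeeping (the batch split here is illustrative, not from the experiment):

```python
import torch
import torch.nn as nn

criterion = nn.MSELoss()  # reduction='mean' by default

# Simulate a dataset of 10 samples split into uneven batches of 6 and 4.
data = torch.arange(10, dtype=torch.float32).view(10, 1)
target = torch.zeros(10, 1)

total = 0.0
for lo, hi in [(0, 6), (6, 10)]:
    batch, tgt = data[lo:hi], target[lo:hi]
    loss = criterion(batch, tgt)          # mean over this batch only
    total += loss.item() * batch.size(0)  # re-weight by batch size
epoch_loss = total / data.size(0)

# Matches the mean squared error computed over the whole dataset at once:
print(abs(epoch_loss - criterion(data, target).item()) < 1e-5)  # True
```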
Results Interpretation

Before: Training loss = 0.02, Validation loss = 0.10 (high gap indicates overfitting)

After: Training loss = 0.035, Validation loss = 0.045 (smaller gap, better generalization)

Dropout and batch normalization reduce overfitting by preventing the model from memorizing training-set details, while the smaller hidden layers limit model capacity, improving validation performance.
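This is also why the training loop switches between `model.train()` and `model.eval()`: dropout is only active in training mode and becomes a no-op at evaluation time, so validation loss is measured on the full network. A minimal demonstration with a standalone dropout layer:

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(1, 10)

drop.train()
y_train = drop(x)  # roughly half the elements zeroed; survivors scaled by 1/(1-p) = 2

drop.eval()
y_eval = drop(x)   # dropout disabled: output is identical to the input
print(torch.equal(y_eval, x))  # True
```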
Bonus Experiment
Try using convolutional layers in the autoencoder instead of linear layers to better capture image features.
💡 Hint
Replace linear layers with Conv2d and ConvTranspose2d layers, and adjust input/output shapes accordingly.
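A minimal sketch of what the bonus experiment could look like. The channel counts, kernel sizes, and strides below are illustrative choices, not values prescribed by the experiment; note the inputs stay image-shaped `(N, 1, 28, 28)` instead of being flattened:

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),   # 28x28 -> 14x14
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 14x14 -> 7x7
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            # output_padding=1 recovers the even spatial size lost by stride-2 downsampling
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),  # 7x7 -> 14x14
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),   # 14x14 -> 28x28
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoencoder()
x = torch.randn(8, 1, 28, 28)  # a batch of image-shaped tensors, no flattening
print(model(x).shape)          # torch.Size([8, 1, 28, 28])
```

The training loop above would need only one change: drop the `data.view(data.size(0), -1)` flattening and feed the 4-D batch directly to the model.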