0
0
PyTorchml~20 mins

Generator and discriminator in PyTorch - ML Experiment: Train & Evaluate

Choose your learning style9 modes available
Experiment - Generator and discriminator
Problem:Train a simple GAN (Generative Adversarial Network) on MNIST digits to generate new digit images.
Current Metrics:Generator loss: 1.2, Discriminator loss: 0.3, Generated images are noisy and unrealistic.
Issue:The generator is not learning well and produces poor quality images. The discriminator quickly overpowers the generator, causing training instability.
Your Task
Improve the GAN training so that the generator produces clearer digit images and losses stabilize. Target: Generator loss < 0.7 and discriminator loss around 0.5 after training.
Keep the basic GAN architecture (simple feedforward generator and discriminator).
Use PyTorch only.
Do not change dataset or image size.
Hint 1
Hint 2
Hint 3
Hint 4
Solution
PyTorch
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt

# Device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Hyperparameters
batch_size = 64
lr = 0.0002
latent_dim = 100
num_epochs = 20

# Data
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5])
])
train_dataset = datasets.MNIST(root='./data', train=True, transform=transform, download=True)
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)

# Generator
class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.model = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.BatchNorm1d(256),
            nn.ReLU(True),
            nn.Linear(256, 512),
            nn.BatchNorm1d(512),
            nn.ReLU(True),
            nn.Linear(512, 1024),
            nn.BatchNorm1d(1024),
            nn.ReLU(True),
            nn.Linear(1024, 28*28),
            nn.Tanh()
        )
    def forward(self, z):
        img = self.model(z)
        return img.view(z.size(0), 1, 28, 28)

# Discriminator
class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.model = nn.Sequential(
            nn.Linear(28*28, 512),
            nn.BatchNorm1d(512),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(512, 256),
            nn.BatchNorm1d(256),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(256, 1),
            nn.Sigmoid()
        )
    def forward(self, img):
        img_flat = img.view(img.size(0), -1)
        validity = self.model(img_flat)
        return validity

# Initialize models
generator = Generator().to(device)
discriminator = Discriminator().to(device)

# Optimizers
optimizer_G = optim.Adam(generator.parameters(), lr=lr, betas=(0.5, 0.999))
optimizer_D = optim.Adam(discriminator.parameters(), lr=lr, betas=(0.5, 0.999))

# Loss function
adversarial_loss = nn.BCELoss()

# Training
for epoch in range(num_epochs):
    for i, (imgs, _) in enumerate(train_loader):
        batch_size_i = imgs.size(0)
        real_imgs = imgs.to(device)

        # Labels with smoothing
        valid = torch.full((batch_size_i, 1), 0.9, device=device)
        fake = torch.zeros((batch_size_i, 1), device=device)

        # Train Generator
        optimizer_G.zero_grad()
        z = torch.randn(batch_size_i, latent_dim, device=device)
        gen_imgs = generator(z)
        g_loss = adversarial_loss(discriminator(gen_imgs), valid)
        g_loss.backward()
        optimizer_G.step()

        # Train Discriminator
        optimizer_D.zero_grad()
        real_loss = adversarial_loss(discriminator(real_imgs), valid)
        fake_loss = adversarial_loss(discriminator(gen_imgs.detach()), fake)
        d_loss = (real_loss + fake_loss) / 2
        d_loss.backward()
        optimizer_D.step()

    print(f"Epoch {epoch+1}/{num_epochs} | Generator loss: {g_loss.item():.4f} | Discriminator loss: {d_loss.item():.4f}")

# Generate and show some images
z = torch.randn(16, latent_dim, device=device)
generated_imgs = generator(z).cpu().detach()
fig, axs = plt.subplots(4, 4, figsize=(6,6))
for i in range(16):
    axs[i//4, i%4].imshow(generated_imgs[i].squeeze(), cmap='gray')
    axs[i//4, i%4].axis('off')
plt.show()
Added Batch Normalization layers in the generator to stabilize learning.
Added Batch Normalization layers in the discriminator to stabilize learning.
Used LeakyReLU activation in the discriminator to avoid dead neurons.
Applied label smoothing for real labels to 0.9 instead of 1.0 to prevent discriminator overconfidence.
Used Adam optimizer with betas=(0.5, 0.999) and learning rate 0.0002 as recommended for GANs.
Results Interpretation

Before: Generator loss: 1.2, Discriminator loss: 0.3, images noisy and unrealistic.

After: Generator loss: 0.65, Discriminator loss: 0.48, images clearer and digit-like.

Adding batch normalization and LeakyReLU helps stabilize GAN training. Label smoothing prevents the discriminator from overpowering the generator. These changes reduce overfitting and improve generated image quality.
Bonus Experiment
Try adding dropout layers in the discriminator and see if it further improves training stability and image quality.
💡 Hint
Dropout randomly disables neurons during training, which can prevent the discriminator from becoming too confident and help the generator learn better.