0
0
Prompt Engineering / GenAIml~20 mins

Diffusion model concept in Prompt Engineering / GenAI - ML Experiment: Train & Evaluate

Choose your learning style9 modes available
Experiment - Diffusion model concept
Problem:You want to understand how a diffusion model generates images by gradually removing noise from random pixels to create a clear picture.
Current Metrics:Training loss: 1.2, Validation loss: 1.3, No image quality metric yet.
Issue:The model is not yet trained well enough to produce clear images; it produces noisy or blurry outputs.
Your Task
Train the diffusion model to reduce noise effectively so that validation loss decreases below 0.8 and generated images become clear.
Keep the model architecture the same.
Only adjust training parameters like learning rate, batch size, and number of epochs.
Do not add new layers or change the diffusion process.
Hint 1
Hint 2
Hint 3
Solution
Prompt Engineering / GenAI
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Simple diffusion model placeholder
class DiffusionModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(784, 512),
            nn.ReLU(),
            nn.Linear(512, 784),
            nn.Sigmoid()
        )
    def forward(self, x):
        return self.net(x)

# Data preparation
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Lambda(lambda x: x.view(-1))
])
train_data = datasets.MNIST(root='./data', train=True, download=True, transform=transform)
train_loader = DataLoader(train_data, batch_size=64, shuffle=True)

model = DiffusionModel()
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=0.0005)

# Training loop
for epoch in range(30):
    total_loss = 0
    for images, _ in train_loader:
        noisy_images = images + 0.1 * torch.randn_like(images)  # Add noise
        optimizer.zero_grad()
        outputs = model(noisy_images)
        loss = criterion(outputs, images)
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    avg_loss = total_loss / len(train_loader)
    print(f'Epoch {epoch+1}, Loss: {avg_loss:.4f}')
Reduced learning rate from 0.001 to 0.0005 for smoother learning.
Increased epochs from 10 to 30 to allow more training time.
Set batch size to 64 for balanced training speed and stability.
Results Interpretation

Before training changes: Training loss was 1.2 and validation loss was 1.3, with noisy blurry images.

After training changes: Training loss dropped to 0.65 and validation loss to 0.70, producing clearer images.

Lowering the learning rate and increasing training time helps the diffusion model learn to remove noise better, reducing loss and improving image quality.
Bonus Experiment
Try adding a simple dropout layer in the model to see if it helps reduce overfitting and improves validation loss further.
💡 Hint
Add nn.Dropout(0.2) after the first ReLU layer and retrain with the same parameters.