
Validation loop in PyTorch - ML Experiment: Train & Evaluate

Experiment - Validation loop
Problem: You have trained a PyTorch model on a training dataset, but you do not have a validation loop to check how well the model performs on unseen data during training.
Current Metrics: Training accuracy: 92%, Validation accuracy: not measured
Issue: Without a validation loop, you cannot tell whether the model is overfitting or generalizing well. This can lead to poor model selection and unexpected results on new data.
Your Task
Implement a validation loop in PyTorch that evaluates the model on a validation dataset after each training epoch and reports validation loss and accuracy.
Use PyTorch framework only.
Do not change the model architecture or training hyperparameters.
Keep the training loop intact; only add the validation loop.
Solution
PyTorch
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

# Sample dataset
X_train = torch.randn(1000, 20)
y_train = (torch.sum(X_train, dim=1) > 0).long()
X_val = torch.randn(200, 20)
y_val = (torch.sum(X_val, dim=1) > 0).long()

train_dataset = TensorDataset(X_train, y_train)
val_dataset = TensorDataset(X_val, y_val)

train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
val_loader = DataLoader(val_dataset, batch_size=32)

# Simple model
class SimpleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(20, 2)
    def forward(self, x):
        return self.fc(x)

model = SimpleNet()
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.01)

epochs = 5

for epoch in range(1, epochs + 1):
    model.train()
    running_loss = 0.0
    correct = 0
    total = 0
    for inputs, labels in train_loader:
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item() * inputs.size(0)
        _, predicted = torch.max(outputs, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

    train_loss = running_loss / total
    train_acc = correct / total * 100

    # Validation loop
    model.eval()
    val_loss = 0.0
    val_correct = 0
    val_total = 0
    with torch.no_grad():
        for inputs, labels in val_loader:
            outputs = model(inputs)
            loss = criterion(outputs, labels)
            val_loss += loss.item() * inputs.size(0)
            _, predicted = torch.max(outputs, 1)
            val_total += labels.size(0)
            val_correct += (predicted == labels).sum().item()

    val_loss /= val_total
    val_acc = val_correct / val_total * 100

    print(f"Epoch {epoch}: Train Loss={train_loss:.4f}, Train Acc={train_acc:.2f}%, Val Loss={val_loss:.4f}, Val Acc={val_acc:.2f}%")
    model.train()
Added a validation DataLoader for the validation dataset.
Implemented a validation loop inside the training epochs using model.eval() and torch.no_grad().
Calculated validation loss and accuracy after each epoch.
Printed validation metrics alongside training metrics.
Added model.train() after validation loop to switch back to training mode.
Results Interpretation

Before adding validation loop:
Training accuracy: 92%
Validation accuracy: Not measured

After adding validation loop:
Training accuracy: ~92%
Validation accuracy: ~90%

Adding a validation loop helps monitor the model's performance on unseen data during training. This is crucial to detect overfitting and to choose the best model.
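One common way to "choose the best model" is to snapshot the weights whenever validation accuracy improves, then restore that snapshot after training. A minimal sketch, using a stand-in linear model and illustrative per-epoch accuracies in place of the `val_acc` values the loop above would produce:

```python
import copy
import torch.nn as nn

model = nn.Linear(20, 2)   # stand-in for the SimpleNet defined above
best_val_acc = 0.0
best_state = None

# Illustrative validation accuracies; in practice these come from the loop above.
for epoch, val_acc in enumerate([85.0, 90.5, 89.0], start=1):
    if val_acc > best_val_acc:
        best_val_acc = val_acc
        # Deep-copy: state_dict() holds references to the live tensors,
        # which later optimizer steps would otherwise overwrite.
        best_state = copy.deepcopy(model.state_dict())

# After training, roll back to the weights from the best epoch.
model.load_state_dict(best_state)
```

The `copy.deepcopy` is the easy-to-miss step: saving `state_dict()` directly would keep pointers to tensors that continue changing as training proceeds.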
Bonus Experiment
Try adding early stopping based on validation loss to stop training when validation loss stops improving.
💡 Hint
Keep track of the best validation loss and stop training if it does not improve for a set number of epochs.
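The hint above can be packaged as a small helper class. This is one possible sketch, not the exercise's official solution; the class name, `patience`, and `min_delta` parameters are illustrative:

```python
class EarlyStopping:
    """Stop training when validation loss has not improved for `patience` epochs."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience       # epochs to wait without improvement
        self.min_delta = min_delta     # minimum decrease that counts as improvement
        self.best_loss = float("inf")
        self.counter = 0

    def step(self, val_loss):
        """Record this epoch's validation loss; return True when training should stop."""
        if val_loss < self.best_loss - self.min_delta:
            self.best_loss = val_loss  # improvement: remember it and reset the counter
            self.counter = 0
        else:
            self.counter += 1          # no improvement this epoch
        return self.counter >= self.patience
```

Inside the epoch loop it would be called once per epoch, e.g. `if stopper.step(val_loss): break`, placed right after the validation metrics are computed.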