
Validation loop in PyTorch

Introduction

A validation loop checks how well a trained model works on new data it hasn't seen before. It helps us know if the model is learning correctly or just memorizing.

Run a validation loop:

After training for some time, to see whether the model is improving.
To compare different models and pick the best one.
To detect overfitting by checking performance on unseen data.
When tuning hyperparameters, to find the best settings.
Before deploying the model, to make sure it works well on real data.
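Checking performance on unseen data requires holding part of the dataset out of training. A minimal sketch of such a split, using a hypothetical toy dataset with torch.utils.data.random_split:

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Toy dataset (illustrative): 100 samples with 5 features each.
x = torch.randn(100, 5)
y = (x.sum(dim=1) > 0).long()
dataset = TensorDataset(x, y)

# Hold out 20% of the data that the model never trains on.
train_set, val_set = random_split(dataset, [80, 20])
print(len(train_set), len(val_set))  # 80 20
```

The validation loop then iterates over a DataLoader built from val_set only.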
Syntax
PyTorch
model.eval()
with torch.no_grad():
    for inputs, labels in validation_loader:
        outputs = model(inputs)
        loss = loss_function(outputs, labels)
        # calculate metrics like accuracy
        # accumulate results for reporting

Use model.eval() to put the model in evaluation mode, which disables training-only behavior such as dropout and switches batch normalization to its running statistics.

Wrap the loop in torch.no_grad() to disable gradient tracking, which saves memory and speeds up computation.
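A small sketch of what these two calls actually change, using a toy dropout layer (the layer itself is illustrative): model.eval() makes dropout deterministic, while torch.no_grad() is what stops gradient tracking. They are separate switches, and validation code normally needs both.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy model with a dropout layer (illustrative only).
layer = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))
x = torch.ones(1, 4)

layer.train()
out_train = layer(x)   # dropout randomly zeroes activations here

layer.eval()
out_eval = layer(x)    # dropout is a no-op; output is deterministic
assert torch.equal(layer(x), out_eval)

# eval() alone does NOT disable autograd...
assert out_eval.requires_grad

# ...that is what torch.no_grad() is for.
with torch.no_grad():
    out = layer(x)
assert not out.requires_grad
```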

Examples
Basic validation loop calculating loss on validation data.
PyTorch
model.eval()
with torch.no_grad():
    for inputs, labels in val_loader:
        outputs = model(inputs)
        loss = criterion(outputs, labels)
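The fragment above computes a loss per batch but discards it. A self-contained sketch (the toy model and data are stand-ins for your own) of reducing the per-batch losses to a single validation-set number:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy model and validation data (illustrative stand-ins).
model = nn.Linear(5, 2)
criterion = nn.CrossEntropyLoss()
x = torch.randn(30, 5)
y = (x.sum(dim=1) > 0).long()
val_loader = DataLoader(TensorDataset(x, y), batch_size=8)

model.eval()
val_loss, total = 0.0, 0
with torch.no_grad():
    for inputs, labels in val_loader:
        loss = criterion(model(inputs), labels)
        # Weight by batch size: the last batch may be smaller (30 % 8 != 0).
        val_loss += loss.item() * inputs.size(0)
        total += inputs.size(0)
mean_val_loss = val_loss / total  # mean loss per sample
```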
Validation loop calculating accuracy of the model.
PyTorch
model.eval()
with torch.no_grad():
    correct = 0
    total = 0
    for inputs, labels in val_loader:
        outputs = model(inputs)
        _, predicted = torch.max(outputs, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
    accuracy = correct / total
Sample Model

This program trains a simple model on random data and then runs a validation loop to check loss and accuracy on validation data after each epoch.

PyTorch
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

# Create simple dataset
x_train = torch.randn(100, 5)
y_train = (x_train.sum(dim=1) > 0).long()
x_val = torch.randn(30, 5)
y_val = (x_val.sum(dim=1) > 0).long()

train_dataset = TensorDataset(x_train, y_train)
val_dataset = TensorDataset(x_val, y_val)

train_loader = DataLoader(train_dataset, batch_size=10)
val_loader = DataLoader(val_dataset, batch_size=10)

# Simple model
class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(5, 2)
    def forward(self, x):
        return self.linear(x)

model = SimpleModel()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

# Train for 3 epochs
for epoch in range(3):
    model.train()
    for inputs, labels in train_loader:
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

    # Validation loop
    model.eval()
    val_loss = 0.0
    correct = 0
    total = 0
    with torch.no_grad():
        for inputs, labels in val_loader:
            outputs = model(inputs)
            loss = criterion(outputs, labels)
            val_loss += loss.item() * inputs.size(0)
            _, predicted = torch.max(outputs, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    val_loss /= total
    accuracy = correct / total
    print(f"Epoch {epoch+1}: Validation Loss = {val_loss:.4f}, Accuracy = {accuracy:.4f}")
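A common next step, sketched here with a hypothetical toy model: use the per-epoch validation loss to keep a copy of the best weights seen so far, a simple form of model selection.

```python
import copy
import torch
import torch.nn as nn

model = nn.Linear(5, 2)  # toy model (illustrative)
best_loss = float("inf")
best_state = None

# Stand-in for the validation losses produced after each epoch.
for val_loss in [0.9, 0.7, 0.8, 0.6]:
    if val_loss < best_loss:
        best_loss = val_loss
        # Snapshot the weights whenever validation loss improves.
        best_state = copy.deepcopy(model.state_dict())

# Restore the best-performing weights before using the model.
model.load_state_dict(best_state)
```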
Important Notes

Always call model.eval() before validation to turn off training-only behavior such as dropout and batch-norm statistic updates.

Use torch.no_grad() during validation to save memory and speed up calculations.

Calculate metrics like accuracy or loss to understand model performance on validation data.
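On recent PyTorch versions, torch.inference_mode() can be used in place of torch.no_grad() for validation; it is stricter about later autograd use of the results and is often slightly faster. A minimal sketch:

```python
import torch
import torch.nn as nn

model = nn.Linear(5, 2)  # toy model (illustrative)
model.eval()

x = torch.randn(3, 5)

# Like no_grad(), inference_mode() disables gradient tracking.
with torch.inference_mode():
    out = model(x)

assert not out.requires_grad
```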

Summary

A validation loop tests the model on new data to check its performance.

Use model.eval() and torch.no_grad() to run validation properly.

Track metrics like loss and accuracy to see how well the model is doing.