
Training and validation loss tracking in PyTorch

Introduction

We track training and validation loss to see how well a model is learning and whether it generalizes to new data.

To check if the model is improving over time during training.
To detect overfitting by comparing training and validation loss.
To find the best-performing settings when tuning hyperparameters.
To decide when to stop training once the model stops improving.
To visualize learning progress with graphs.
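The fourth point, stopping once the model stops improving, is often automated as early stopping: training halts when validation loss has not improved for a set number of epochs. A minimal pure-Python sketch of such a check (the `should_stop` helper and the `patience` value are illustrative, not part of PyTorch):

```python
def should_stop(val_losses, patience=3):
    """Return True if the most recent `patience` validation losses
    all failed to improve on the best loss seen before them."""
    if len(val_losses) <= patience:
        return False
    best_before = min(val_losses[:-patience])
    return all(loss >= best_before for loss in val_losses[-patience:])

# Validation loss improves, then stalls for 3 epochs:
print(should_stop([0.9, 0.7, 0.6, 0.61, 0.62, 0.65]))  # True
print(should_stop([0.9, 0.7, 0.6, 0.55, 0.62, 0.65]))  # False
```

In a training loop you would call this after appending each epoch's validation loss and `break` when it returns True.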
Syntax
PyTorch
for epoch in range(num_epochs):
    model.train()
    train_loss = 0
    for inputs, targets in train_loader:
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = loss_function(outputs, targets)
        loss.backward()
        optimizer.step()
        train_loss += loss.item()
    train_loss /= len(train_loader)

    model.eval()
    val_loss = 0
    with torch.no_grad():
        for inputs, targets in val_loader:
            outputs = model(inputs)
            loss = loss_function(outputs, targets)
            val_loss += loss.item()
    val_loss /= len(val_loader)

    print(f"Epoch {epoch+1}, Training Loss: {train_loss:.4f}, Validation Loss: {val_loss:.4f}")

Use model.train() before training and model.eval() before validation to set correct modes.

Use torch.no_grad() during validation to disable gradient tracking, which saves memory and computation.
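The mode switch matters whenever the model contains layers such as nn.Dropout or nn.BatchNorm1d, which behave differently in training and evaluation. A small sketch showing the effect (the toy model here is illustrative):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))
x = torch.ones(1, 4)

model.train()   # dropout active: repeated calls typically give different outputs
out_a = model(x)
out_b = model(x)

model.eval()    # dropout disabled: outputs are deterministic
out_c = model(x)
out_d = model(x)

print(torch.equal(out_c, out_d))  # True: eval mode is deterministic
```

Forgetting model.eval() before validation would make validation loss depend on random dropout masks.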

Examples
This calculates average training loss for one epoch.
PyTorch
train_loss = 0
for inputs, targets in train_loader:
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = loss_function(outputs, targets)
    loss.backward()
    optimizer.step()
    train_loss += loss.item()
train_loss /= len(train_loader)
This calculates average validation loss without updating model weights.
PyTorch
val_loss = 0
with torch.no_grad():
    for inputs, targets in val_loader:
        outputs = model(inputs)
        loss = loss_function(outputs, targets)
        val_loss += loss.item()
val_loss /= len(val_loader)
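One subtlety: dividing by len(val_loader) averages per batch, which slightly skews the result when the last batch is smaller than the others. If an exact per-sample average matters, weight each batch loss by its size. A self-contained sketch (the toy data is illustrative; note batch sizes 10, 10, 5):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy data: 25 samples with batch_size=10 -> batches of 10, 10, 5
x = torch.linspace(0, 1, 25).unsqueeze(1)
y = 2 * x + 1
val_loader = DataLoader(TensorDataset(x, y), batch_size=10)

model = nn.Linear(1, 1)
loss_function = nn.MSELoss()

model.eval()
val_loss = 0.0
num_samples = 0
with torch.no_grad():
    for inputs, targets in val_loader:
        outputs = model(inputs)
        loss = loss_function(outputs, targets)     # mean loss over this batch
        val_loss += loss.item() * inputs.size(0)   # weight by batch size
        num_samples += inputs.size(0)
val_loss /= num_samples                            # exact per-sample average
print(f"Validation loss: {val_loss:.4f}")
```

With equal batch sizes the two averages coincide, so the simpler per-batch form above is usually fine.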
Sample Model

This program trains a simple linear model to fit y = 2x + 1 with some noise. It prints training and validation loss for 5 epochs.

PyTorch
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

# Simple dataset: y = 2x + 1
x_train = torch.linspace(0, 1, 100).unsqueeze(1)
y_train = 2 * x_train + 1 + 0.1 * torch.randn_like(x_train)

x_val = torch.linspace(0, 1, 20).unsqueeze(1)
y_val = 2 * x_val + 1 + 0.1 * torch.randn_like(x_val)

train_dataset = TensorDataset(x_train, y_train)
val_dataset = TensorDataset(x_val, y_val)

train_loader = DataLoader(train_dataset, batch_size=10)
val_loader = DataLoader(val_dataset, batch_size=5)

# Simple linear model
model = nn.Linear(1, 1)

loss_function = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

num_epochs = 5

for epoch in range(num_epochs):
    model.train()
    train_loss = 0
    for inputs, targets in train_loader:
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = loss_function(outputs, targets)
        loss.backward()
        optimizer.step()
        train_loss += loss.item()
    train_loss /= len(train_loader)

    model.eval()
    val_loss = 0
    with torch.no_grad():
        for inputs, targets in val_loader:
            outputs = model(inputs)
            loss = loss_function(outputs, targets)
            val_loss += loss.item()
    val_loss /= len(val_loader)

    print(f"Epoch {epoch+1}, Training Loss: {train_loss:.4f}, Validation Loss: {val_loss:.4f}")
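To produce the graphs mentioned in the introduction, collect each epoch's averaged losses in a history and plot them afterwards. A condensed sketch (full-batch updates here to keep it short; matplotlib is assumed for the optional plot and is skipped if absent):

```python
import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(0)
x_train = torch.linspace(0, 1, 100).unsqueeze(1)
y_train = 2 * x_train + 1
x_val = torch.linspace(0, 1, 20).unsqueeze(1)
y_val = 2 * x_val + 1

model = nn.Linear(1, 1)
loss_function = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

history = {"train": [], "val": []}          # one entry per epoch
for epoch in range(5):
    model.train()
    optimizer.zero_grad()
    loss = loss_function(model(x_train), y_train)
    loss.backward()
    optimizer.step()
    history["train"].append(loss.item())

    model.eval()
    with torch.no_grad():
        history["val"].append(loss_function(model(x_val), y_val).item())

# Plot the two curves (skipped if matplotlib is not installed):
try:
    import matplotlib.pyplot as plt
    plt.plot(history["train"], label="training loss")
    plt.plot(history["val"], label="validation loss")
    plt.xlabel("epoch")
    plt.ylabel("loss")
    plt.legend()
    plt.savefig("loss_curves.png")
except ImportError:
    pass
```

Keeping the history as plain lists also makes it easy to feed the same numbers into an early-stopping check or a logging tool.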
Important Notes

Training loss usually decreases over epochs, though individual batch losses can fluctuate.

If validation loss starts increasing while training loss decreases, the model may be overfitting.

Averaging loss over all batches in an epoch gives a smoother, more reliable estimate than individual batch losses, which are noisy.
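When per-batch losses are noisy, an exponential moving average is a common way to smooth the curve before plotting. A minimal pure-Python sketch (the `smooth` helper and the `beta` factor are illustrative):

```python
def smooth(losses, beta=0.9):
    """Exponentially smooth a sequence of loss values."""
    avg, out = 0.0, []
    for i, loss in enumerate(losses, start=1):
        avg = beta * avg + (1 - beta) * loss
        out.append(avg / (1 - beta ** i))  # bias correction for early steps
    return out

noisy = [1.0, 0.5, 0.9, 0.4, 0.7, 0.3]
print(smooth(noisy))
```

A larger `beta` smooths more aggressively but lags further behind the true trend.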

Summary

Track training and validation loss to monitor model learning and generalization.

Use model.train() and model.eval() to switch modes correctly.

Calculate average loss over batches for clear progress tracking.