PyTorch · How-To · Beginner · 4 min read

How to Track Training Loss in PyTorch: Simple Guide

To track training loss in PyTorch, compute the loss at each training step with a loss function and record it in a list or a logger. Use loss.item() to extract the scalar loss value; this keeps tracking and visualization cheap and avoids holding on to the computation graph.
📐

Syntax

Here is the basic syntax to track training loss during a training loop in PyTorch:

  • loss = criterion(outputs, targets): Compute loss between model outputs and true targets.
  • loss.backward(): Compute gradients for backpropagation.
  • optimizer.step(): Update model parameters.
  • loss.item(): Extract the scalar loss value for tracking.
  • Store or print the loss value each iteration to monitor training progress.
```python
losses = []  # collected loss values, one per batch

for data, targets in dataloader:
    optimizer.zero_grad()
    outputs = model(data)
    loss = criterion(outputs, targets)
    loss.backward()
    optimizer.step()
    loss_value = loss.item()  # scalar loss
    losses.append(loss_value)  # track loss
```
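Printing every single step gets noisy on large datasets, so a common pattern is to record every value but only print a running average every N steps. Below is a minimal sketch on random data; the model, batch, and `log_every` names are illustrative, not part of any fixed API:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Hypothetical model and data for illustration
model = nn.Linear(3, 1)
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)
data = torch.randn(32, 3)
targets = torch.randn(32, 1)

losses = []      # every step's loss, for later plotting
log_every = 10   # print a running average every 10 steps
running = 0.0

for step in range(1, 51):
    optimizer.zero_grad()
    loss = criterion(model(data), targets)
    loss.backward()
    optimizer.step()

    value = loss.item()
    losses.append(value)
    running += value
    if step % log_every == 0:
        print(f"Step {step}: avg loss {running / log_every:.4f}")
        running = 0.0
```

The running average smooths out per-batch noise in the console output while the full `losses` list keeps every value for plotting.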
💻

Example

This example shows a simple training loop on dummy data that tracks and prints the training loss for each batch.

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Dummy dataset
inputs = torch.randn(10, 3)
targets = torch.randn(10, 1)

# Simple linear model
model = nn.Linear(3, 1)
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

losses = []

for epoch in range(2):  # 2 epochs
    for i in range(len(inputs)):
        input_sample = inputs[i].unsqueeze(0)
        target_sample = targets[i].unsqueeze(0)

        optimizer.zero_grad()
        output = model(input_sample)
        loss = criterion(output, target_sample)
        loss.backward()
        optimizer.step()

        loss_value = loss.item()
        losses.append(loss_value)
        print(f"Epoch {epoch+1}, Step {i+1}, Loss: {loss_value:.4f}")
```
Output (illustrative; actual values will vary with random initialization)
Epoch 1, Step 1, Loss: 1.1234
Epoch 1, Step 2, Loss: 0.9876
Epoch 1, Step 3, Loss: 0.8765
Epoch 1, Step 4, Loss: 0.7654
Epoch 1, Step 5, Loss: 0.6543
Epoch 1, Step 6, Loss: 0.5432
Epoch 1, Step 7, Loss: 0.4321
Epoch 1, Step 8, Loss: 0.3210
Epoch 1, Step 9, Loss: 0.2109
Epoch 1, Step 10, Loss: 0.1008
Epoch 2, Step 1, Loss: 0.0907
Epoch 2, Step 2, Loss: 0.0806
Epoch 2, Step 3, Loss: 0.0705
Epoch 2, Step 4, Loss: 0.0604
Epoch 2, Step 5, Loss: 0.0503
Epoch 2, Step 6, Loss: 0.0402
Epoch 2, Step 7, Loss: 0.0301
Epoch 2, Step 8, Loss: 0.0200
Epoch 2, Step 9, Loss: 0.0100
Epoch 2, Step 10, Loss: 0.0000
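Once losses are collected, plotting the curve is the quickest way to spot divergence or plateaus. A minimal sketch, assuming matplotlib is installed; the synthetic `losses` list and the file name `training_loss.png` are illustrative:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

# Hypothetical loss history, e.g. the `losses` list from the example above
losses = [1.0 / (i + 1) for i in range(20)]

plt.plot(losses)
plt.xlabel("Training step")
plt.ylabel("Loss")
plt.title("Training loss over time")
plt.savefig("training_loss.png")
```

In a notebook you would call `plt.show()` instead of `savefig`.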
⚠️

Common Pitfalls

  • Not calling loss.item(): appending the loss tensor itself retains the computation graph, so memory grows every step.
  • Not zeroing gradients: forgetting optimizer.zero_grad() accumulates gradients across steps and distorts both the updates and the loss values you record.
  • Calling backward() before computing the loss: the loss must be computed first; loss.backward() then propagates its gradients.
  • Not storing loss values: without saving losses to a list or logger, you cannot plot or analyze training progress later.
```python
# Wrong way (retains the computation graph, grows memory):
losses.append(loss)

# Right way (stores a plain Python float):
losses.append(loss.item())
```
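To see why the two lines differ, you can inspect the tensor directly: the raw loss still carries a grad_fn (a handle into the computation graph), while loss.item() returns a plain Python float. A small sketch with a hypothetical model:

```python
import torch
import torch.nn as nn

# Hypothetical model and data for illustration
model = nn.Linear(3, 1)
criterion = nn.MSELoss()
x = torch.randn(4, 3)
y = torch.randn(4, 1)

loss = criterion(model(x), y)

print(loss.grad_fn is not None)  # True: the graph is still attached
print(type(loss.item()))         # <class 'float'>: safe to store

# loss.detach() also drops the graph but keeps a tensor,
# useful if you want to keep the value on the GPU
detached = loss.detach()
print(detached.grad_fn is None)  # True
```

`detach()` is a reasonable middle ground when you want tensor values without the graph; `item()` additionally moves the value to the CPU as a Python number.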
📊

Quick Reference

Tips to track training loss effectively in PyTorch:

  • Use loss.item() to get scalar loss value.
  • Store loss values in a list for plotting or analysis.
  • Print loss periodically to monitor training.
  • Reset gradients each step with optimizer.zero_grad().
  • Use a proper loss function matching your task (e.g., nn.MSELoss() for regression).
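These tips can be combined into a per-epoch summary: average the batch losses within each epoch so the curve is less noisy than per-step values. A minimal sketch on hypothetical random data:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Hypothetical data and model for illustration
inputs = torch.randn(16, 3)
targets = torch.randn(16, 1)
model = nn.Linear(3, 1)
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

epoch_losses = []  # one averaged value per epoch

for epoch in range(3):
    batch_losses = []
    for i in range(0, len(inputs), 4):  # mini-batches of 4
        optimizer.zero_grad()
        loss = criterion(model(inputs[i:i+4]), targets[i:i+4])
        loss.backward()
        optimizer.step()
        batch_losses.append(loss.item())  # scalar per batch
    epoch_losses.append(sum(batch_losses) / len(batch_losses))
    print(f"Epoch {epoch+1}: mean loss {epoch_losses[-1]:.4f}")
```

Plotting `epoch_losses` instead of the raw per-step list gives a smoother view of the overall trend.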

Key Takeaways

  • Always use loss.item() to get a scalar loss value for tracking.
  • Store loss values in a list or log to monitor training progress over time.
  • Call optimizer.zero_grad() before each backward pass to avoid gradient accumulation.
  • Print or plot loss values regularly to detect training issues early.
  • Use the correct loss function for your specific problem.