
Training loop structure in PyTorch

Introduction

A training loop is how a model learns from data: it repeatedly feeds batches through the model, measures the error, and updates the model's weights so its predictions improve.

Typical situations that call for a training loop:

When you want to teach a model to recognize images.
When you need to improve a model's predictions on new data.
When training a model to understand text or speech.
When adjusting model weights to reduce errors.
When running experiments to compare different models.
Syntax
PyTorch
for epoch in range(num_epochs):
    for inputs, labels in dataloader:
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = loss_function(outputs, labels)
        loss.backward()
        optimizer.step()

The outer loop runs over epochs (full passes over data).

The inner loop runs over mini-batches served by the dataloader, which keeps memory usage manageable and training efficient.
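The skeleton above assumes that `model`, `dataloader`, `loss_function`, `optimizer`, and `num_epochs` already exist. A minimal, self-contained sketch that fills in those pieces (the toy data and linear model here are placeholders for illustration):

```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical toy data: 20 samples with 4 features, 1 target each
dataset = TensorDataset(torch.randn(20, 4), torch.randn(20, 1))
dataloader = DataLoader(dataset, batch_size=5)

model = nn.Linear(4, 1)            # any nn.Module works here
loss_function = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)
num_epochs = 2

for epoch in range(num_epochs):
    for inputs, labels in dataloader:
        optimizer.zero_grad()               # reset gradients from the last step
        outputs = model(inputs)             # forward pass
        loss = loss_function(outputs, labels)
        loss.backward()                     # compute gradients
        optimizer.step()                    # update weights
```

Swapping in a real dataset and model leaves the loop itself unchanged; that separation is the point of the pattern.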

Examples
Basic training loop running 5 epochs over batches from train_loader.
PyTorch
for epoch in range(5):
    for inputs, labels in train_loader:
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
Training loop that accumulates the loss over each epoch and prints the average loss per epoch.
PyTorch
for epoch in range(3):
    total_loss = 0
    for inputs, labels in train_loader:
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    print(f"Epoch {epoch+1}, Loss: {total_loss/len(train_loader):.4f}")
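Storing the per-epoch averages in a list, rather than only printing them, makes it easy to plot a loss curve or compare runs later. A sketch of that variation, using hypothetical toy data for illustration:

```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)  # reproducible toy run
train_loader = DataLoader(
    TensorDataset(torch.randn(40, 3), torch.randn(40, 1)), batch_size=8
)
model = nn.Linear(3, 1)
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.05)

epoch_losses = []                   # one average loss per epoch
for epoch in range(3):
    total_loss = 0.0
    for inputs, labels in train_loader:
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    epoch_losses.append(total_loss / len(train_loader))

print(len(epoch_losses))  # 3
```

`epoch_losses` can then be fed straight into a plotting library or logged for later comparison.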
Sample Model

This program trains a simple linear model on random data for 3 epochs. It prints the average loss after each epoch to show learning progress.

PyTorch
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

# Simple dataset: inputs and labels
inputs = torch.randn(100, 3)
labels = torch.randn(100, 1)

dataset = TensorDataset(inputs, labels)
train_loader = DataLoader(dataset, batch_size=10)

# Simple model: one linear layer
model = nn.Linear(3, 1)

# Loss function and optimizer
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

num_epochs = 3

for epoch in range(num_epochs):
    total_loss = 0
    for batch_inputs, batch_labels in train_loader:
        optimizer.zero_grad()
        outputs = model(batch_inputs)
        loss = criterion(outputs, batch_labels)
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    avg_loss = total_loss / len(train_loader)
    print(f"Epoch {epoch+1}, Loss: {avg_loss:.4f}")
Important Notes

Always call optimizer.zero_grad() before loss.backward(); PyTorch accumulates gradients by default, so skipping it mixes in gradients from previous batches.

Use batches to speed up training and use less memory.

Printing loss helps see if the model is learning or stuck.
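The accumulation behavior behind the first note can be seen directly: calling backward() twice without zero_grad() adds the gradients together. A small sketch (the tiny linear model and random tensors are just for demonstration):

```python
import torch
import torch.nn as nn

model = nn.Linear(2, 1)
x = torch.randn(4, 2)
y = torch.randn(4, 1)
loss_fn = nn.MSELoss()

# First backward pass: .grad starts fresh
loss_fn(model(x), y).backward()
g1 = model.weight.grad.clone()

# Second backward pass WITHOUT zero_grad(): gradients accumulate
loss_fn(model(x), y).backward()
g2 = model.weight.grad.clone()

print(torch.allclose(g2, 2 * g1))  # True: the same gradient was added twice
```

Because the weights were never updated between the two passes, the second gradient is identical to the first, so the stored gradient exactly doubles.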

Summary

A training loop repeats over data to improve the model.

It has two loops: epochs and batches.

Key steps: zero gradients, forward pass, compute loss, backward pass, update weights.