
Why the training loop is explicit in PyTorch

Introduction

PyTorch makes you write the training loop yourself so you can see and control every step. That transparency makes models easier to understand, debug, and customize.

When you want to understand how your model learns, step by step.
When you need to customize training for special tasks or data.
When debugging your model to find and fix problems.
When experimenting with new ideas that need control over training.
When learning machine learning basics and want to see what happens inside.
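That "see what happens inside" point is concrete in PyTorch: every forward pass, loss, and gradient is an ordinary Python value you can print. A minimal sketch, with toy parameter and data values chosen purely for illustration:

```python
import torch

# Toy parameters and one data point (values chosen for illustration)
w = torch.tensor(2.0, requires_grad=True)
b = torch.tensor(0.5, requires_grad=True)
x, y_true = torch.tensor(3.0), torch.tensor(7.0)

# Forward pass: every intermediate is an ordinary value you can inspect
y_pred = w * x + b              # 2*3 + 0.5 = 6.5
loss = (y_pred - y_true) ** 2   # (6.5 - 7)^2 = 0.25

# Backward pass: autograd fills in .grad on each parameter
loss.backward()

print(loss.item())    # 0.25
print(w.grad.item())  # dL/dw = 2*(y_pred - y)*x = -3.0
print(b.grad.item())  # dL/db = 2*(y_pred - y)   = -1.0
```

Nothing here is hidden behind a framework callback: you can set a breakpoint or add a print between any two lines.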
Syntax
PyTorch
for epoch in range(num_epochs):
    for inputs, labels in dataloader:
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = loss_function(outputs, labels)
        loss.backward()
        optimizer.step()

The outer loop passes over your dataset multiple times; each full pass is one epoch.

For each batch you zero the stale gradients, run a forward pass, compute the loss, backpropagate, and apply the weight update yourself.
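The manual update is not magic: for plain SGD, optimizer.step() amounts to subtracting the learning rate times each parameter's .grad, and optimizer.zero_grad() resets those gradients. A sketch with a single hand-rolled weight (target and learning rate are illustrative):

```python
import torch

# One trainable weight; toy target is y = 2x at x = 1, lr chosen for illustration
w = torch.tensor(0.0, requires_grad=True)
lr = 0.1
x, y = torch.tensor(1.0), torch.tensor(2.0)

for step in range(3):
    loss = (w * x - y) ** 2
    loss.backward()
    with torch.no_grad():
        w -= lr * w.grad  # what optimizer.step() does for plain SGD
        w.grad.zero_()    # what optimizer.zero_grad() does

print(round(w.item(), 3))  # 0.976, moving toward the target 2.0
```

Using torch.optim just packages these two lines so they work for any number of parameters and fancier update rules (momentum, Adam, etc.).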

Examples
Basic training loop running 3 times over data.
PyTorch
for epoch in range(3):
    for x, y in train_loader:
        optimizer.zero_grad()
        y_pred = model(x)
        loss = loss_fn(y_pred, y)
        loss.backward()
        optimizer.step()
Same loop with different variable names and 5 epochs.
PyTorch
for epoch in range(5):
    for inputs, targets in data_loader:
        optimizer.zero_grad()
        predictions = model(inputs)
        loss = criterion(predictions, targets)
        loss.backward()
        optimizer.step()
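Because the loop is plain Python, other phases slot straight in. A common addition is a validation pass after each epoch; below is a minimal sketch in which small hypothetical tensors stand in for real DataLoaders:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Hypothetical tiny model and tensors standing in for real DataLoaders
model = nn.Linear(1, 1)
optimizer = optim.SGD(model.parameters(), lr=0.01)
criterion = nn.MSELoss()
train_x = torch.tensor([[1.0], [2.0]]); train_y = torch.tensor([[3.0], [5.0]])
val_x = torch.tensor([[3.0]]);          val_y = torch.tensor([[7.0]])

for epoch in range(5):
    model.train()          # training mode (matters for dropout, batchnorm)
    optimizer.zero_grad()
    loss = criterion(model(train_x), train_y)
    loss.backward()
    optimizer.step()

    model.eval()           # evaluation mode
    with torch.no_grad():  # no gradients needed for validation
        val_loss = criterion(model(val_x), val_y).item()
```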
Sample Model

This code trains a simple model to learn y = 2x + 1. The training loop is explicit so you see each step: zeroing gradients, forward pass, loss calculation, backward pass, and optimizer step. After training, it predicts the output for input 5.0.

PyTorch
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

# Simple dataset: y = 2x + 1
x_train = torch.tensor([[1.0], [2.0], [3.0], [4.0]])
y_train = torch.tensor([[3.0], [5.0], [7.0], [9.0]])

dataset = TensorDataset(x_train, y_train)
dataloader = DataLoader(dataset, batch_size=2, shuffle=True)

# Simple linear model
model = nn.Linear(1, 1)

# Loss and optimizer
loss_fn = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)

num_epochs = 100  # enough passes for SGD with lr=0.01 to fit this tiny dataset

for epoch in range(num_epochs):
    total_loss = 0
    for inputs, labels in dataloader:
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = loss_fn(outputs, labels)
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    if (epoch + 1) % 10 == 0:
        print(f"Epoch {epoch+1}, Loss: {total_loss:.4f}")

# Test prediction; no_grad() skips graph building since we only need the value
test_input = torch.tensor([[5.0]])
with torch.no_grad():
    prediction = model(test_input).item()
print(f"Prediction for input 5.0: {prediction:.2f}")
Important Notes

Explicit loops help you understand what happens inside training.

You can modify any step directly, for example adding gradient clipping, custom logging, or an extra loss term.

Higher-level libraries such as Keras hide this loop behind a single fit() call; PyTorch exposes it for learning and flexibility.
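As one concrete example of that flexibility, a step like gradient clipping can be dropped between backward() and step(), exactly where it has to run; a hidden fit() loop may not offer a hook at that point. A sketch reusing the same kind of tiny linear model (model and data are illustrative):

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Tiny stand-in model and data (illustrative, mirroring the sample above)
model = nn.Linear(1, 1)
optimizer = optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()
x = torch.tensor([[1.0], [2.0]])
y = torch.tensor([[3.0], [5.0]])

for epoch in range(2):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    # The extra step: cap the global gradient norm at 1.0 before updating
    nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
```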

Summary

PyTorch uses explicit training loops so you control every step.

This helps learning, debugging, and customizing models.

Writing the loop yourself shows how training really works.