PyTorch · ~20 mins

Why the training loop is explicit in PyTorch - Experiment to Prove It

Experiment - Why the training loop is explicit in PyTorch
Problem: You have a simple neural network model in PyTorch that trains well, but you want to understand why PyTorch requires you to write the training loop explicitly instead of hiding it.
Current Metrics: Training accuracy: 95%, Validation accuracy: 92%, loss decreases smoothly.
Issue: The training loop is explicit and manual, which can be confusing for beginners used to frameworks that automate training.
Your Task
Explain and demonstrate why PyTorch uses an explicit training loop by modifying the loop to include a simple change and observe the effect.
Do not use high-level training APIs like PyTorch Lightning or fastai.
Keep the model and dataset simple (e.g., MNIST or a small synthetic dataset).
Solution
PyTorch
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

# Simple dataset: XOR problem
X = torch.tensor([[0,0],[0,1],[1,0],[1,1]], dtype=torch.float32)
Y = torch.tensor([[0],[1],[1],[0]], dtype=torch.float32)

dataset = TensorDataset(X, Y)
loader = DataLoader(dataset, batch_size=2, shuffle=True)

# Simple model
class SimpleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(2, 4)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(4, 1)
        self.sigmoid = nn.Sigmoid()
    def forward(self, x):
        x = self.fc1(x)
        x = self.relu(x)
        x = self.fc2(x)
        x = self.sigmoid(x)
        return x

model = SimpleNet()
criterion = nn.BCELoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

epochs = 2000  # XOR needs many passes with plain SGD; 10 epochs is far too few to converge
for epoch in range(epochs):
    for batch_idx, (inputs, targets) in enumerate(loader):
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, targets)
        loss.backward()
        # Custom operation: clip gradients to max norm 1
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
        optimizer.step()
        # Print batch-wise loss, throttled so the output stays readable
        if (epoch + 1) % 200 == 0:
            print(f'Epoch {epoch+1}, Batch {batch_idx+1}, Loss: {loss.item():.4f}')

# After training, test model predictions
with torch.no_grad():
    preds = model(X)
    predicted = (preds > 0.5).float()
    print('Predictions:', predicted.squeeze().tolist())
Added explicit training loop with batch-wise loss printing.
Included manual gradient clipping inside the loop.
Demonstrated how explicit control allows adding custom steps easily.
Results Interpretation

Before: With a hidden training loop (as in fit()-style frameworks), it is unclear how batches are iterated and when parameter updates happen.
After: Explicit loop shows each batch's loss and allows adding custom steps like gradient clipping.

PyTorch uses an explicit training loop to give you full control over every step of training. This flexibility helps you customize training easily, which is why you write the loop yourself.
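Owning the loop also makes it trivial to inspect quantities that hidden loops obscure, such as the gradient norm right after backward(). A minimal sketch of this idea (the linear model and synthetic data here are illustrative stand-ins, not part of the exercise):

```python
import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(0)  # reproducible illustration
model = nn.Linear(2, 1)
optimizer = optim.SGD(model.parameters(), lr=0.1)
criterion = nn.MSELoss()

X = torch.randn(16, 2)
Y = X.sum(dim=1, keepdim=True)  # a target a linear model can fit

for step in range(5):
    optimizer.zero_grad()
    loss = criterion(model(X), Y)
    loss.backward()
    # Inspect the total gradient norm before the update --
    # possible only because we own the loop.
    grad_norm = torch.norm(
        torch.stack([p.grad.norm() for p in model.parameters()]))
    optimizer.step()
    print(f'step {step}, loss {loss.item():.4f}, grad norm {grad_norm:.4f}')
```

In a framework with a hidden loop, this kind of per-step instrumentation usually requires a callback API; here it is one line of code.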
Bonus Experiment
Try adding a manual learning rate scheduler inside the training loop that reduces the learning rate every few epochs.
💡 Hint
Use optimizer.param_groups to adjust learning rate manually after each epoch.
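One way to realize the bonus experiment, following the hint above: adjust the learning rate by hand through optimizer.param_groups at the end of each epoch. The decay interval and factor below are illustrative choices, not prescribed by the exercise:

```python
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(2, 1)  # stand-in; reuse SimpleNet from the solution
optimizer = optim.SGD(model.parameters(), lr=0.1)

decay_every = 3  # illustrative: halve the learning rate every 3 epochs
gamma = 0.5

for epoch in range(10):
    # ... run the usual batch loop (forward, backward, clip, step) here ...
    if (epoch + 1) % decay_every == 0:
        for group in optimizer.param_groups:
            group['lr'] *= gamma
    print(f"Epoch {epoch+1}, lr = {optimizer.param_groups[0]['lr']:.4f}")
# After 10 epochs the lr has been halved 3 times: 0.1 * 0.5**3 == 0.0125
```

This is exactly what torch.optim.lr_scheduler.StepLR does internally; writing it manually shows there is no magic involved.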