How to Use MSELoss in PyTorch for Regression Tasks
In PyTorch, torch.nn.MSELoss computes the mean squared error between predicted and target tensors. You create an instance of MSELoss, then call it with your model's output and the true values to get the loss value for training.
Syntax
The MSELoss class is used as a function to calculate the mean squared error loss. You first create a loss object, then call it with two tensors: predictions and targets.
- torch.nn.MSELoss(): Creates the loss function object.
- loss(predictions, targets): Computes the mean squared error between predictions and targets.
```python
import torch
import torch.nn as nn

loss_fn = nn.MSELoss()

# predictions and targets are tensors of the same shape
predictions = torch.tensor([0.5, 0.7, 0.2])
targets = torch.tensor([0.0, 1.0, 0.0])

loss_value = loss_fn(predictions, targets)
```
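MSELoss also accepts a `reduction` argument that controls how the per-element squared errors are combined: `'mean'` (the default), `'sum'`, or `'none'`. A short sketch using the same tensors as above:

```python
import torch
import torch.nn as nn

predictions = torch.tensor([0.5, 0.7, 0.2])
targets = torch.tensor([0.0, 1.0, 0.0])

# Default: average the squared errors over all elements
mean_loss = nn.MSELoss(reduction="mean")(predictions, targets)

# Sum the squared errors instead of averaging
sum_loss = nn.MSELoss(reduction="sum")(predictions, targets)

# No reduction: returns the per-element squared errors as a tensor
per_element = nn.MSELoss(reduction="none")(predictions, targets)

print(mean_loss.item())  # (0.25 + 0.09 + 0.04) / 3
print(sum_loss.item())   # 0.25 + 0.09 + 0.04
print(per_element)       # tensor of shape (3,)
```

`reduction="none"` is handy when you want to weight or mask individual errors before averaging them yourself.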
Example
This example shows how to use MSELoss in a simple training step for a linear model. It calculates the loss between the model's output and the true target values.
```python
import torch
import torch.nn as nn
import torch.optim as optim

# Simple linear model
model = nn.Linear(1, 1)

# Mean Squared Error loss
loss_fn = nn.MSELoss()

# Optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)

# Sample input and target
x = torch.tensor([[1.0], [2.0], [3.0]])
y = torch.tensor([[2.0], [4.0], [6.0]])

# Forward pass
predictions = model(x)

# Compute loss
loss = loss_fn(predictions, y)
print(f"Loss before backward: {loss.item():.4f}")

# Backward pass and optimization
optimizer.zero_grad()
loss.backward()
optimizer.step()

# Forward pass after one update
predictions = model(x)
loss = loss_fn(predictions, y)
print(f"Loss after one step: {loss.item():.4f}")
```
Output
Loss before backward: 14.1234
Loss after one step: 12.9876

(Exact values vary from run to run because nn.Linear initializes its weights randomly; the second loss should be lower than the first.)
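The single update above extends naturally to a full training loop. Here is a minimal sketch on the same toy data, where the true relation is y = 2x; the seed and step count are illustrative choices, not part of the original example:

```python
import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(0)  # fix the random initial weights for reproducibility

model = nn.Linear(1, 1)
loss_fn = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)

x = torch.tensor([[1.0], [2.0], [3.0]])
y = torch.tensor([[2.0], [4.0], [6.0]])  # true relation: y = 2x

for epoch in range(1000):
    optimizer.zero_grad()          # clear gradients from the previous step
    loss = loss_fn(model(x), y)    # forward pass + loss
    loss.backward()                # compute gradients
    optimizer.step()               # update weights

print(f"Final loss: {loss.item():.4f}")
print(f"Learned weight: {model.weight.item():.2f}")  # approaches 2.0
```

After enough steps the learned weight approaches 2 and the loss approaches zero, which is exactly what minimizing the mean squared error on this data should produce.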
Common Pitfalls
Common mistakes when using MSELoss include:
- Passing predictions and targets with different shapes. Mismatched shapes may broadcast silently (PyTorch emits a warning) and produce an incorrect loss value.
- Not converting targets to the same data type as predictions (usually float32).
- Using MSELoss for classification tasks instead of regression.
Always ensure your model output and target tensors have the same shape and type.
```python
import torch
import torch.nn as nn

loss_fn = nn.MSELoss()

# Wrong: targets are integers, predictions are floats
predictions = torch.tensor([0.5, 0.7, 0.2], dtype=torch.float32)
targets = torch.tensor([0, 1, 0], dtype=torch.int64)  # wrong dtype

# loss = loss_fn(predictions, targets)  # Uncommenting raises a RuntimeError

# Correct: convert targets to float
correct_targets = targets.float()
loss = loss_fn(predictions, correct_targets)
print(f"Correct loss: {loss.item():.4f}")
```
Output
Correct loss: 0.1267
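The shape pitfall is worth a demonstration of its own: a model often outputs shape (N, 1) while targets are stored as shape (N,). MSELoss then broadcasts the two tensors against each other and returns a wrong number rather than failing loudly. A sketch:

```python
import torch
import torch.nn as nn

loss_fn = nn.MSELoss()

# Model outputs are often shape (N, 1); targets often shape (N,)
predictions = torch.tensor([[0.5], [0.7], [0.2]])  # shape (3, 1)
targets = torch.tensor([0.0, 1.0, 0.0])            # shape (3,)

# Broadcasting turns this into a (3, 3) comparison — PyTorch warns,
# but still returns a value, and it is not the loss you wanted
wrong = loss_fn(predictions, targets)

# Correct: make the shapes match explicitly before computing the loss
right = loss_fn(predictions.squeeze(1), targets)

print(f"Mismatched shapes: {wrong.item():.4f}")
print(f"Matching shapes:   {right.item():.4f}")
```

Checking `predictions.shape == targets.shape` before calling the loss (or using `.squeeze()` / `.unsqueeze()` deliberately) avoids this silent error.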
Quick Reference
| Step | Description |
|---|---|
| Import | import torch.nn as nn |
| Create loss | loss_fn = nn.MSELoss() |
| Calculate loss | loss = loss_fn(predictions, targets) |
| Use loss | loss.backward() and optimizer.step() for training |
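Besides the class-based API summarized above, PyTorch also provides a functional form, torch.nn.functional.mse_loss, which skips the loss-object step; it is equivalent to the default MSELoss and to the textbook definition of MSE:

```python
import torch
import torch.nn.functional as F

predictions = torch.tensor([0.5, 0.7, 0.2])
targets = torch.tensor([0.0, 1.0, 0.0])

# Functional form: no loss object needed
loss = F.mse_loss(predictions, targets)

# Same result as computing MSE by hand
manual = ((predictions - targets) ** 2).mean()

print(loss.item(), manual.item())
```

The functional form is convenient for one-off computations; the class form is conventional inside model or training-loop definitions.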
Key Takeaways
- Use torch.nn.MSELoss() to create a mean squared error loss function.
- Pass predictions and targets tensors of the same shape and dtype to the loss function.
- MSELoss is best for regression tasks, not classification.
- Convert integer target tensors to float to avoid dtype errors.
- Call loss.backward() and optimizer.step() to update model weights during training.