
Loss functions (MSELoss, CrossEntropyLoss) in PyTorch

Introduction

Loss functions tell us how wrong our model's predictions are. They help the model learn by showing it what to fix.

When predicting continuous numbers like house prices, use MSELoss.
When classifying images into categories like cats or dogs, use CrossEntropyLoss.
When training a model to predict probabilities over multiple classes, use CrossEntropyLoss.
When you want to measure how close your predicted values are to the true values, use MSELoss.
In both cases, the model improves step by step by minimizing the loss during training.
Syntax
PyTorch
import torch.nn as nn

# For regression tasks (continuous values)
loss_fn = nn.MSELoss()

# For classification tasks (multiple classes)
loss_fn = nn.CrossEntropyLoss()

MSELoss stands for Mean Squared Error Loss. It measures the average squared difference between predicted and true values.
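A quick sketch of what MSELoss computes, using hand-picked values to compare the built-in loss against the formula written out by hand:

```python
import torch
import torch.nn as nn

pred = torch.tensor([2.5, 0.0, 2.0, 8.0])
true = torch.tensor([3.0, -0.5, 2.0, 7.0])

# MSE by hand: mean of the squared differences
manual = ((pred - true) ** 2).mean()
builtin = nn.MSELoss()(pred, true)

print(manual.item(), builtin.item())  # both 0.375
```

The differences here are -0.5, 0.5, 0.0, and 1.0, so the squared errors average to (0.25 + 0.25 + 0 + 1) / 4 = 0.375.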

CrossEntropyLoss combines a log-softmax and negative log likelihood (NLLLoss) in one step. It is used for classification tasks where the targets are class indices.
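That equivalence can be verified directly; the logits below are illustrative values, not from any real model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.tensor([[1.0, 2.0, 0.5], [0.5, 0.2, 2.1]])
labels = torch.tensor([1, 2])

# CrossEntropyLoss applied to raw logits
ce = nn.CrossEntropyLoss()(logits, labels)

# Equivalent: log-softmax followed by negative log likelihood
nll = F.nll_loss(F.log_softmax(logits, dim=1), labels)

print(ce.item(), nll.item())  # identical values
```

This is why CrossEntropyLoss takes raw scores: it applies the softmax internally.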

Examples
This example calculates MSE loss between predicted and true continuous values.
PyTorch
import torch
import torch.nn as nn

loss_fn = nn.MSELoss()
predictions = torch.tensor([2.5, 0.0, 2.0, 8.0])
true_values = torch.tensor([3.0, -0.5, 2.0, 7.0])
loss = loss_fn(predictions, true_values)
print(loss.item())
This example calculates CrossEntropy loss for two samples with three classes each.
PyTorch
import torch
import torch.nn as nn

loss_fn = nn.CrossEntropyLoss()
predictions = torch.tensor([[1.0, 2.0, 0.5], [0.5, 0.2, 2.1]])  # raw scores (logits)
true_labels = torch.tensor([1, 2])  # class indices
loss = loss_fn(predictions, true_labels)
print(loss.item())
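Both examples above print a single number because the losses average over all elements by default. Their `reduction` argument changes this; a short sketch with the MSE values from earlier:

```python
import torch
import torch.nn as nn

pred = torch.tensor([2.5, 0.0, 2.0, 8.0])
true = torch.tensor([3.0, -0.5, 2.0, 7.0])

# 'none' keeps one loss value per element; 'sum' adds them up;
# the default 'mean' averages them
per_element = nn.MSELoss(reduction='none')(pred, true)
total = nn.MSELoss(reduction='sum')(pred, true)

print(per_element)       # tensor([0.2500, 0.2500, 0.0000, 1.0000])
print(total.item())      # 1.5
```

`reduction='none'` is handy when you want to inspect or reweight per-sample errors before averaging yourself.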
Sample Model

This program shows how to use MSELoss for a regression example and CrossEntropyLoss for a classification example. It prints the loss values.

PyTorch
import torch
import torch.nn as nn

# Example with MSELoss (regression)
loss_fn_mse = nn.MSELoss()
pred_mse = torch.tensor([1.5, 2.0, 3.5])
true_mse = torch.tensor([2.0, 2.0, 3.0])
loss_mse = loss_fn_mse(pred_mse, true_mse)
print(f"MSE Loss: {loss_mse.item():.4f}")

# Example with CrossEntropyLoss (classification)
loss_fn_ce = nn.CrossEntropyLoss()
pred_ce = torch.tensor([[2.0, 1.0, 0.1], [0.1, 0.2, 3.0]])  # logits
true_ce = torch.tensor([0, 2])  # class labels
loss_ce = loss_fn_ce(pred_ce, true_ce)
print(f"CrossEntropy Loss: {loss_ce.item():.4f}")
Important Notes

For CrossEntropyLoss, input predictions should be raw scores (logits), not probabilities; the softmax is applied internally.

MSELoss expects predictions and targets to have the same shape and be continuous values.

Loss values help guide the model to improve by minimizing them during training.
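A minimal sketch of that training loop, fitting y = 2x with MSELoss; the model, data, and learning rate here are illustrative assumptions:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)  # for reproducible initial weights

# Illustrative setup: a one-weight linear model and a gradient-descent optimizer
model = nn.Linear(1, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

x = torch.tensor([[1.0], [2.0], [3.0]])
y = torch.tensor([[2.0], [4.0], [6.0]])  # true relationship: y = 2x

initial_loss = loss_fn(model(x), y).item()
for step in range(100):
    optimizer.zero_grad()        # clear old gradients
    loss = loss_fn(model(x), y)  # measure how wrong the predictions are
    loss.backward()              # compute gradients of the loss
    optimizer.step()             # adjust weights to reduce the loss

print(f"loss: {initial_loss:.4f} -> {loss.item():.4f}")
```

Each iteration the loss shrinks, which is exactly what "learning" means here: the optimizer follows the loss gradient downhill.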

Summary

Loss functions measure how far predictions are from true values.

MSELoss is for continuous value prediction (regression).

CrossEntropyLoss is for classification with class labels.