How to Use NLLLoss in PyTorch: Syntax and Example
Use torch.nn.NLLLoss to compute the negative log likelihood loss for classification tasks in PyTorch. It expects log-probabilities as input and target class indices as labels. Apply torch.log_softmax to your model outputs before passing them to NLLLoss.

Syntax
The NLLLoss class computes the negative log likelihood loss. It is used for classification problems where the model outputs log-probabilities.
- input: Tensor of shape (N, C) with log-probabilities for each class.
- target: Tensor of shape (N,) with class indices (0 to C-1).
- weight (optional): a manual rescaling weight given to each class.
- reduction: specifies how to reduce the loss ('mean', 'sum', or 'none').
```python
loss = torch.nn.NLLLoss(weight=None, reduction='mean')
output = torch.log_softmax(model_output, dim=1)
loss_value = loss(output, target)
```
Example
This example shows how to use NLLLoss with a simple tensor input and target labels. It demonstrates applying log_softmax to model outputs before calculating the loss.
```python
import torch
import torch.nn as nn

# Sample model output (logits) for 3 samples and 4 classes
logits = torch.tensor([[2.0, 0.5, 0.3, 0.1],
                       [0.1, 0.2, 0.3, 2.1],
                       [1.0, 2.0, 0.1, 0.2]])

# Target class indices for each sample
targets = torch.tensor([0, 3, 1])

# Create NLLLoss instance
loss_fn = nn.NLLLoss()

# Apply log_softmax to logits to get log-probabilities
log_probs = torch.log_softmax(logits, dim=1)

# Calculate loss
loss = loss_fn(log_probs, targets)
print(f"Loss: {loss.item():.4f}")
```
Output
Loss: 0.4446
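As a sanity check, log_softmax followed by NLLLoss is exactly what torch.nn.CrossEntropyLoss computes in one step, so the same logits and targets should produce the same value. A quick verification sketch:

```python
import torch
import torch.nn as nn

logits = torch.tensor([[2.0, 0.5, 0.3, 0.1],
                       [0.1, 0.2, 0.3, 2.1],
                       [1.0, 2.0, 0.1, 0.2]])
targets = torch.tensor([0, 3, 1])

# NLLLoss over log-probabilities...
nll = nn.NLLLoss()(torch.log_softmax(logits, dim=1), targets)
# ...equals CrossEntropyLoss over raw logits
ce = nn.CrossEntropyLoss()(logits, targets)

print(torch.allclose(nll, ce))  # True
```

This is why models trained with CrossEntropyLoss output raw logits, while models trained with NLLLoss need an explicit log_softmax layer.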
Common Pitfalls
Common mistakes when using NLLLoss include:
- Passing raw logits directly without applying log_softmax. NLLLoss expects log-probabilities, not raw scores.
- Using one-hot encoded targets instead of class indices. Targets must be class indices (integers).
- Mismatching input and target shapes.
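The shape mismatch in the last bullet does raise an exception: for input of shape (N, C), the target must be a 1D tensor of shape (N,). A sketch of what a trailing extra dimension looks like (the exact error message varies by PyTorch version, so it is not quoted here):

```python
import torch
import torch.nn as nn

loss_fn = nn.NLLLoss()
log_probs = torch.log_softmax(torch.randn(2, 3), dim=1)  # shape (2, 3)

# Wrong: target shaped (2, 1) instead of (2,)
bad_targets = torch.tensor([[0], [2]])
try:
    loss_fn(log_probs, bad_targets)
except (RuntimeError, ValueError) as e:
    print(f"Shape error: {e}")

# Right: 1D target of class indices, shape (2,)
good_targets = torch.tensor([0, 2])
print(loss_fn(log_probs, good_targets).item())
```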
Here is a wrong and right usage example. Note that PyTorch does not raise an error when raw logits have a valid shape and dtype; NLLLoss silently computes a meaningless (here negative) loss:

```python
import torch
import torch.nn as nn

logits = torch.tensor([[2.0, 0.5], [0.1, 0.2]])
targets = torch.tensor([0, 1])
loss_fn = nn.NLLLoss()

# Wrong: passing raw logits directly.
# No exception is raised, but the result is not a valid NLL.
loss_wrong = loss_fn(logits, targets)
print(f"Wrong loss: {loss_wrong.item():.4f}")

# Right: apply log_softmax first
log_probs = torch.log_softmax(logits, dim=1)
loss_right = loss_fn(log_probs, targets)
print(f"Correct loss: {loss_right.item():.4f}")
```

Output
Wrong loss: -1.1000
Correct loss: 0.4229
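The one-hot pitfall, by contrast, does raise an exception, because NLLLoss requires an integer index tensor as the target. If your labels arrive one-hot encoded, argmax over the class dimension recovers the indices (a sketch with illustrative values):

```python
import torch
import torch.nn as nn

loss_fn = nn.NLLLoss()
log_probs = torch.log_softmax(torch.tensor([[2.0, 0.5], [0.1, 0.2]]), dim=1)

# Wrong: one-hot encoded float targets raise an error
one_hot = torch.tensor([[1.0, 0.0], [0.0, 1.0]])
try:
    loss_fn(log_probs, one_hot)
except (RuntimeError, ValueError) as e:
    print(f"Error: {e}")

# Right: convert one-hot rows to class indices with argmax
targets = one_hot.argmax(dim=1)  # tensor([0, 1])
print(f"Loss: {loss_fn(log_probs, targets).item():.4f}")
```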
Quick Reference
Summary tips for using NLLLoss:
- Always apply torch.log_softmax to model outputs before passing to NLLLoss.
- Targets must be class indices, not one-hot vectors.
- Use reduction='mean' for average loss or 'sum' to sum losses.
- Use weight to handle class imbalance by assigning weights per class.
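The last two tips can be combined in a short sketch; the class weights and logits below are illustrative values, not taken from the example above:

```python
import torch
import torch.nn as nn

log_probs = torch.log_softmax(torch.tensor([[2.0, 0.5, 0.3],
                                            [0.1, 0.2, 2.1]]), dim=1)
targets = torch.tensor([0, 2])

# reduction='none' keeps one loss value per sample, shape (N,)
per_sample = nn.NLLLoss(reduction='none')(log_probs, targets)
print(per_sample.shape)  # torch.Size([2])

# weight upweights selected classes; here class 2 counts double.
# With reduction='mean', the result is a weighted average of per-sample losses.
weighted = nn.NLLLoss(weight=torch.tensor([1.0, 1.0, 2.0]))(log_probs, targets)
print(f"Weighted loss: {weighted.item():.4f}")
```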
Key Takeaways
- NLLLoss expects log-probabilities as input, so apply torch.log_softmax to model outputs first.
- Targets must be integer class indices, not one-hot encoded vectors.
- Use the reduction parameter to control how loss values are combined.
- Class weights can be used to balance training on imbalanced datasets.
- Passing raw logits directly to NLLLoss silently produces incorrect loss values and derails training.