PyTorch · How-To · Beginner · 3 min read

How to Use nn.Dropout in PyTorch: Syntax and Example

In PyTorch, use nn.Dropout(p) to randomly zero elements of the input with probability p during training, which helps prevent overfitting. Apply it inside your model's forward method and make sure it is active only during training by switching between model.train() and model.eval().
📐

Syntax

The nn.Dropout layer is initialized with a probability p that defines the chance of each input element being dropped (set to zero) during training. It is used as a layer in your neural network model.

  • p: float between 0 and 1, the dropout probability.
  • During training, it zeroes each input element independently with probability p and scales the surviving elements by 1/(1 - p), so the expected value of the output matches the input.
  • During evaluation, it passes inputs through unchanged.
```python
import torch.nn as nn

dropout_layer = nn.Dropout(p=0.3)
```
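nn.Dropout uses "inverted dropout": during training, surviving elements are scaled by 1/(1 - p) so the expected value of the output matches the input. A quick sketch of this behavior (which elements survive depends on the random mask):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)  # for a reproducible mask
drop = nn.Dropout(p=0.5)
drop.train()  # make sure dropout is active

x = torch.ones(8)
y = drop(x)
# Each element is either dropped (0.0) or scaled by 1/(1 - 0.5) = 2.0
print(y)
```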
💻

Example

This example shows a simple neural network using nn.Dropout to regularize the model. Dropout is active only during training and disabled during evaluation.

```python
import torch
import torch.nn as nn
import torch.optim as optim

class SimpleNet(nn.Module):
    def __init__(self):
        super(SimpleNet, self).__init__()
        self.fc1 = nn.Linear(10, 20)
        self.dropout = nn.Dropout(p=0.5)
        self.fc2 = nn.Linear(20, 1)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = self.dropout(x)  # Dropout applied here
        x = self.fc2(x)
        return x

# Create model and optimizer
model = SimpleNet()
optimizer = optim.SGD(model.parameters(), lr=0.01)

# Dummy input and target
inputs = torch.randn(5, 10)
targets = torch.randn(5, 1)

# Training mode enables dropout
model.train()
optimizer.zero_grad()  # clear stale gradients before backward()
outputs_train = model(inputs)
loss_train = nn.MSELoss()(outputs_train, targets)
loss_train.backward()
optimizer.step()

# Evaluation mode disables dropout
model.eval()
outputs_eval = model(inputs)

print("Outputs with dropout (train mode):", outputs_train)
print("Outputs without dropout (eval mode):", outputs_eval)
```
Output
```
Outputs with dropout (train mode): tensor([[ 0.0913],
        [ 0.0112],
        [-0.0423],
        [ 0.0347],
        [ 0.0201]], grad_fn=<AddmmBackward0>)
Outputs without dropout (eval mode): tensor([[ 0.0457],
        [ 0.0224],
        [-0.0106],
        [ 0.0683],
        [ 0.0402]], grad_fn=<AddmmBackward0>)
```
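A quick way to confirm the mode switch is working: in eval mode, nn.Dropout is the identity, so its output equals its input exactly:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
drop = nn.Dropout(p=0.5)
drop.eval()  # dropout disabled

x = torch.randn(4, 3)
assert torch.equal(drop(x), x)  # eval mode passes input through unchanged
print("eval-mode dropout is the identity")
```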
⚠️

Common Pitfalls

  • Forgetting to switch between model.train() and model.eval() causes dropout to stay active during evaluation or to be disabled during training.
  • Using dropout with p=0 applies no dropout at all, which defeats its purpose; p=1 zeroes every input.
  • The functional form torch.nn.functional.dropout defaults to training=True and ignores the module's mode, so calling it without training=self.training keeps dropout active even in eval mode.
```python
import torch
import torch.nn as nn

# Wrong: dropout applied but model is in eval mode (dropout disabled)
model = nn.Sequential(
    nn.Linear(10, 10),
    nn.Dropout(p=0.5)
)
model.eval()
x = torch.randn(1, 10)  # avoid shadowing the built-in `input`
output_eval = model(x)  # Dropout not applied here

# Right: set model to train mode to enable dropout
model.train()
output_train = model(x)  # Dropout applied

print("Output eval mode:", output_eval)
print("Output train mode:", output_train)
```
Output
```
Output eval mode: tensor([[ 0.1234, -0.5678,  0.2345,  0.6789, -0.1234,  0.3456, -0.4567,  0.5678,
         -0.6789,  0.7890]])
Output train mode: tensor([[ 0.0000, -1.1356,  0.0000,  1.3578, -0.0000,  0.6912, -0.9134,  1.1356,
         -1.3578,  1.5780]])
```
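The functional form is a frequent source of this pitfall: torch.nn.functional.dropout defaults to training=True and knows nothing about the module's mode, so it must be tied to self.training explicitly. A minimal sketch:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.ones(6)

# F.dropout defaults to training=True, so it stays active
# regardless of model.eval()
noisy = F.dropout(x, p=0.5)

# Passing training=False (inside a module: training=self.training)
# disables it, making the call a no-op
clean = F.dropout(x, p=0.5, training=False)
assert torch.equal(clean, x)
print("clean output:", clean)
```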
📊

Quick Reference

| Parameter | Description |
| --- | --- |
| p | Dropout probability (float between 0 and 1) |
| train() | Enables dropout during training |
| eval() | Disables dropout during evaluation |
| forward(x) | Applies dropout to input tensor x during training |
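Under the hood, train() and eval() set the boolean .training flag recursively on every submodule, which is what nn.Dropout checks at call time. A small sketch of the toggle (the two-layer Sequential is just an illustration):

```python
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.2))

model.train()
assert model.training and model[1].training   # flag propagates to the Dropout child

model.eval()
assert not model.training and not model[1].training
print("mode toggling works as expected")
```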

Key Takeaways

  • Use nn.Dropout(p) inside your model to randomly zero inputs during training for regularization.
  • Always call model.train() to enable dropout and model.eval() to disable it during evaluation.
  • Dropout probability p controls how many inputs are dropped; typical values are 0.2 to 0.5.
  • Do not apply dropout outside the model's forward method or forget to switch modes.
  • Dropout helps reduce overfitting by preventing co-adaptation of neurons.