PyTorch · How-To · Beginner · 3 min read

How to Use Activation Functions in PyTorch: Simple Guide

In PyTorch, activation functions are used by importing them from torch.nn.functional or torch.nn and applying them to tensor data. You can call functions like torch.nn.functional.relu() directly on a tensor, or add activation layers like torch.nn.ReLU() to your model.
📐 Syntax

Activation functions in PyTorch can be used in two main ways:

  • Functional API: Call activation functions directly on tensors using torch.nn.functional. For example, F.relu(input_tensor).
  • Module API: Use activation layers as part of a model by creating instances like nn.ReLU() and calling them as functions.

This flexibility lets you use activations either inline or as layers.

python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Functional API usage
input_tensor = torch.tensor([-1.0, 0.0, 1.0, 2.0])
output = F.relu(input_tensor)

# Module API usage
relu_layer = nn.ReLU()
output_layer = relu_layer(input_tensor)
Output
tensor([0., 0., 1., 2.])
💻 Example

This example shows how to apply the ReLU activation function using both the functional and module approaches in PyTorch. It prints the input tensor and the activated outputs.

python
import torch
import torch.nn as nn
import torch.nn.functional as F

input_tensor = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0])

# Using functional API
relu_output = F.relu(input_tensor)

# Using module API
relu_layer = nn.ReLU()
relu_layer_output = relu_layer(input_tensor)

print('Input:', input_tensor)
print('ReLU Functional Output:', relu_output)
print('ReLU Module Output:', relu_layer_output)
Output
Input: tensor([-2.0000, -0.5000, 0.0000, 0.5000, 2.0000])
ReLU Functional Output: tensor([0.0000, 0.0000, 0.0000, 0.5000, 2.0000])
ReLU Module Output: tensor([0.0000, 0.0000, 0.0000, 0.5000, 2.0000])
⚠️ Common Pitfalls

Common mistakes when using activation functions in PyTorch include:

  • Forgetting to import torch.nn.functional as F when using the functional API.
  • Calling activation layers without parentheses, e.g., using nn.ReLU instead of nn.ReLU().
  • Applying activation functions to the wrong tensor shape or forgetting to move tensors to the correct device (CPU/GPU).
  • Mixing functional and module APIs incorrectly inside models.
python
import torch
import torch.nn as nn
import torch.nn.functional as F

input_tensor = torch.tensor([-1.0, 0.0, 1.0])

# Wrong: missing parentheses for module
# relu_layer = nn.ReLU  # This is the class itself, not an instance
# output = relu_layer(input_tensor)  # Constructs a ReLU module (the tensor is passed as
#                                    # the inplace argument) instead of computing an output

# Correct usage
relu_layer = nn.ReLU()
output = relu_layer(input_tensor)

# Wrong: forgetting to import functional
# output_func = relu(input_tensor)  # NameError

# Correct functional usage
output_func = F.relu(input_tensor)
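The device pitfall from the list above deserves its own snippet: activations compute on whatever device the input tensor lives on, so move the tensor first. This is a minimal sketch that falls back to CPU when no GPU is available.

```python
import torch
import torch.nn.functional as F

# Pick a device, then place the tensor on it before applying the activation.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
x = torch.tensor([-1.0, 2.0], device=device)
y = F.relu(x)  # the output stays on the same device as x

print(y.device == x.device)  # True
```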
📊 Quick Reference

Activation Function | Functional API            | Module API          | Description
ReLU                | F.relu(input)             | nn.ReLU()           | Sets negative values to zero
Sigmoid             | torch.sigmoid(input)      | nn.Sigmoid()        | Maps values to the (0, 1) range
Tanh                | torch.tanh(input)         | nn.Tanh()           | Maps values to the (-1, 1) range
LeakyReLU           | F.leaky_relu(input)       | nn.LeakyReLU()      | Allows a small negative slope
Softmax             | F.softmax(input, dim=...) | nn.Softmax(dim=...) | Converts values to a probability distribution
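The entries in the table above can be exercised in one short script; the comments describe the standard behavior of each function.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.tensor([-1.0, 0.0, 1.0])

print(F.relu(x))                             # negatives clipped to zero
print(torch.sigmoid(x))                      # values squashed into (0, 1); sigmoid(0) = 0.5
print(torch.tanh(x))                         # values squashed into (-1, 1); tanh(0) = 0
print(F.leaky_relu(x, negative_slope=0.01))  # negatives scaled by 0.01 instead of zeroed
print(F.softmax(x, dim=0))                   # entries are positive and sum to 1
```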

Key Takeaways

  • Use activation functions from torch.nn.functional for direct calls or torch.nn modules for layers.
  • Always instantiate activation layers with parentheses before using them.
  • Check tensor shapes and device placement before applying activations.
  • Common activations include ReLU, Sigmoid, Tanh, LeakyReLU, and Softmax.
  • Mixing functional and module APIs is fine, but be consistent and clear in your code.
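To tie the two APIs together, here is a minimal sketch of both styles inside one model. The class and layer sizes (TinyNet, 4 → 8 → 2) are illustrative choices, not from the text above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 8)
        self.fc2 = nn.Linear(8, 2)
        self.act = nn.ReLU()  # module API: created once in __init__

    def forward(self, x):
        x = self.act(self.fc1(x))   # module style
        return F.relu(self.fc2(x))  # functional style, same math

model = TinyNet()
out = model(torch.randn(3, 4))
print(out.shape)  # torch.Size([3, 2])
```

The module style registers the activation as a named layer (useful with nn.Sequential and when printing the model); the functional style keeps forward() concise. Both produce identical results for stateless activations like ReLU.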