
Linear (fully connected) layers in PyTorch

Introduction
A linear layer connects every input to every output with a learnable weight, letting the model learn linear relationships in the data.
When you want to transform input features into output features in a neural network.
When building a simple classifier that needs to combine all input information.
When you want to reduce or expand the size of data features in a model.
When connecting layers in a deep learning model to learn patterns.
When you want to predict continuous values from input features.
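Under the hood, a linear layer computes y = x·Wᵀ + b for each input. A minimal sketch comparing the layer's output to the manual matrix product (layer sizes here are illustrative):

```python
import torch
import torch.nn as nn

# A layer mapping 4 input features to 2 output features
layer = nn.Linear(4, 2)

# Batch of 3 samples, 4 features each
x = torch.randn(3, 4)

# The layer applies x @ W^T + b
manual = x @ layer.weight.T + layer.bias
assert torch.allclose(layer(x), manual, atol=1e-6)
```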
Syntax
PyTorch
torch.nn.Linear(in_features, out_features, bias=True)
in_features is the number of input values per data point.
out_features is the number of output values the layer produces.
bias controls whether a learnable offset is added to each output (default True).
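These two arguments also determine the parameter shapes: the weight matrix is (out_features, in_features) and the bias vector is (out_features,). A quick check:

```python
import torch.nn as nn

layer = nn.Linear(10, 5)

# Weight is stored as (out_features, in_features)
print(layer.weight.shape)  # torch.Size([5, 10])
print(layer.bias.shape)    # torch.Size([5])
```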
Examples
Creates a linear layer that takes 10 inputs and outputs 5 values.
PyTorch
layer = torch.nn.Linear(10, 5)
Creates a linear layer with 3 inputs and 1 output without adding a bias term.
PyTorch
layer = torch.nn.Linear(3, 1, bias=False)
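With bias=False, the layer stores no bias parameter at all; layer.bias is simply None, and the layer computes a pure matrix product:

```python
import torch
import torch.nn as nn

layer = nn.Linear(3, 1, bias=False)

# No bias parameter is created
print(layer.bias)  # None

x = torch.randn(2, 3)
print(layer(x).shape)  # torch.Size([2, 1])
```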
Sample Model
This code creates a linear layer that transforms 4 input features into 2 output features. It then passes a batch of 3 inputs through the layer and prints the results.
PyTorch
import torch
import torch.nn as nn

# Create a linear layer with 4 inputs and 2 outputs
linear_layer = nn.Linear(4, 2)

# Example input: batch of 3 data points, each with 4 features
input_data = torch.tensor([[1.0, 2.0, 3.0, 4.0],
                           [4.0, 3.0, 2.0, 1.0],
                           [0.5, 0.5, 0.5, 0.5]])

# Pass input through the linear layer
output = linear_layer(input_data)

# Print output shape and values
print(f"Output shape: {output.shape}")
print(f"Output values:\n{output}")
Important Notes
The weights and bias in a linear layer are learned during training.
For a 2-D input of shape (batch_size, in_features), the output shape is (batch_size, out_features); any extra leading dimensions are preserved.
You can access the weights with layer.weight and bias with layer.bias.
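To see the learning happen, here is a minimal sketch of a single training step, assuming an MSE objective and SGD (the data and learning rate are illustrative):

```python
import torch
import torch.nn as nn

layer = nn.Linear(4, 2)
optimizer = torch.optim.SGD(layer.parameters(), lr=0.1)

x = torch.randn(3, 4)       # random inputs
target = torch.randn(3, 2)  # random targets

before = layer.weight.clone()

# One step: compute loss, backpropagate, update parameters
loss = nn.functional.mse_loss(layer(x), target)
loss.backward()
optimizer.step()

print(torch.equal(before, layer.weight))  # False: the weights moved
```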
Summary
Linear layers connect every input to every output with weights and optional bias.
They transform input features into output features in neural networks.
They are simple but powerful building blocks for many models.