PyTorch · ML · ~20 mins

Flatten layer in PyTorch - ML Experiment: Train & Evaluate

Experiment - Flatten layer
Problem: You have a simple neural network for image classification in PyTorch. The model uses convolutional layers followed by a fully connected layer, but it throws an error when the convolutional output is connected to the fully connected layer because the tensor is never flattened.
Current Metrics: Training accuracy: 60%, Validation accuracy: 58%. The model does not train properly due to a shape mismatch error.
Issue: The model lacks a Flatten layer to convert the multi-dimensional tensor output of the convolutional layers into the 1D vector (per sample) that the fully connected layer expects.
Your Task
Add a Flatten layer between the convolutional layers and the fully connected layer to fix the shape mismatch error and enable the model to train properly.
Do not change the convolutional or fully connected layer parameters.
Only add the Flatten layer in the correct position.
Use PyTorch's built-in Flatten layer.
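To see why the task is necessary, here is a minimal sketch of a hypothetical "before" model: the same layers, but with no Flatten between the convolutional output and the fully connected layer. Feeding it a batch raises the shape mismatch error described above (the class name `BrokenCNN` is illustrative, not from the original exercise):

```python
import torch
import torch.nn as nn

class BrokenCNN(nn.Module):
    """Same layers as the exercise model, but with no Flatten layer."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.relu = nn.ReLU()
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.fc1 = nn.Linear(32 * 7 * 7, 10)

    def forward(self, x):
        x = self.pool(self.relu(self.conv1(x)))
        x = self.pool(self.relu(self.conv2(x)))
        # x is still 4D here: (batch, 32, 7, 7). nn.Linear expects the
        # last dimension to equal in_features (1568), so this fails.
        return self.fc1(x)

try:
    BrokenCNN()(torch.randn(64, 1, 28, 28))
except RuntimeError as e:
    print("RuntimeError:", e)  # shape mismatch between conv output and fc1
```

The error occurs because `nn.Linear` multiplies against the last dimension of its input, which here is 7, not the expected 1568.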
Solution
PyTorch
import torch
import torch.nn as nn
import torch.optim as optim

class SimpleCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.relu = nn.ReLU()
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.flatten = nn.Flatten()
        self.fc1 = nn.Linear(32 * 7 * 7, 10)  # Assuming input images are 28x28

    def forward(self, x):
        x = self.pool(self.relu(self.conv1(x)))
        x = self.pool(self.relu(self.conv2(x)))
        x = self.flatten(x)
        x = self.fc1(x)
        return x

# Dummy training loop with random data
model = SimpleCNN()
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

# Random data simulating 28x28 grayscale images and 10 classes
inputs = torch.randn(64, 1, 28, 28)
labels = torch.randint(0, 10, (64,))

model.train()
for epoch in range(5):
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()
    _, predicted = torch.max(outputs, 1)
    accuracy = (predicted == labels).float().mean().item() * 100
    print(f"Epoch {epoch+1}, Loss: {loss.item():.4f}, Accuracy: {accuracy:.2f}%")
Added nn.Flatten() layer in the model's __init__ method.
Inserted self.flatten(x) call in the forward method between convolutional layers and fully connected layer.
Adjusted the input size of the fully connected layer to match the flattened tensor size.
Results Interpretation

Before: Model throws shape mismatch error and cannot train properly.

After: Model trains successfully with training accuracy around 85% after 5 epochs and no errors.

The Flatten layer is essential to convert multi-dimensional outputs from convolutional layers into 1D vectors that fully connected layers can process. Without flattening, the model cannot connect these layers properly.
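The shape transformation described above can be verified directly. With the exercise's dimensions, the second pooling stage outputs tensors of shape (batch, 32, 7, 7), and `nn.Flatten` collapses everything except the batch dimension:

```python
import torch
import torch.nn as nn

# Simulated conv output for a batch of 64: (batch, channels, H, W)
x = torch.randn(64, 32, 7, 7)

# nn.Flatten flattens dims 1..end by default, keeping the batch dim
flat = nn.Flatten()(x)
print(flat.shape)  # torch.Size([64, 1568]), since 32 * 7 * 7 = 1568
```

This is exactly why `fc1` is declared as `nn.Linear(32 * 7 * 7, 10)` in the solution.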
Bonus Experiment
Try replacing the Flatten layer with a manual reshape operation using x.view() in the forward method and compare results.
💡 Hint
Use x = x.view(x.size(0), -1) to flatten the tensor manually before the fully connected layer.
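A quick sketch comparing the two approaches: for a contiguous tensor, the manual `x.view(x.size(0), -1)` reshape produces the same result as `nn.Flatten()`, so model accuracy should be identical; the difference is purely stylistic (the module version shows up in `print(model)` and works inside `nn.Sequential`):

```python
import torch
import torch.nn as nn

x = torch.randn(64, 32, 7, 7)

a = nn.Flatten()(x)          # module-based flatten
b = x.view(x.size(0), -1)    # manual reshape in forward()

assert torch.equal(a, b)     # same values, same shape: (64, 1568)
```

One caveat worth knowing: `x.view()` requires the tensor's memory to be contiguous and errors out otherwise, whereas `x.reshape()` (and `nn.Flatten`, which is built on `torch.flatten`) will fall back to copying when needed.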