Challenge - 5 Problems
PyTorch Model Mastery
Get all challenges correct to earn this badge!
Test your skills under time pressure!
❓ Predict Output
intermediate · 2:00
Output of a simple PyTorch model forward pass
What is the output shape of the tensor after running the forward method of this model with input shape (10, 5)?
PyTorch
import torch
import torch.nn as nn

class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(5, 3)

    def forward(self, x):
        return self.linear(x)

model = SimpleModel()
input_tensor = torch.randn(10, 5)
output = model(input_tensor)
output_shape = output.shape
print(output_shape)
💡 Hint
Remember the linear layer changes the last dimension from input features to output features.
✗ Incorrect
The input tensor has shape (10, 5). The linear layer maps 5 features to 3 features, so the output shape is (10, 3).
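A minimal sketch of why this holds: `nn.Linear(5, 3)` transforms only the last dimension (5 → 3) and leaves the batch dimension untouched.

```python
import torch
import torch.nn as nn

# nn.Linear(in_features=5, out_features=3) maps the last
# dimension 5 -> 3; the batch dimension (10) passes through.
linear = nn.Linear(5, 3)
x = torch.randn(10, 5)
y = linear(x)
print(tuple(y.shape))  # → (10, 3)
```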
❓ Model Choice
intermediate · 2:00
Choosing the correct model class definition
Which of the following PyTorch model class definitions correctly implements a model with one hidden layer of size 4 and ReLU activation?
💡 Hint
Check the input and output sizes of each layer and the order of activation.
✗ Incorrect
Option D correctly defines a hidden layer from 3 to 4 units, applies ReLU, then outputs from 4 to 2 units. Other options have incorrect layer sizes or apply activation after the output layer.
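The full option list isn't reproduced here, but a sketch matching the explanation (3 → 4 hidden layer, ReLU on the hidden units only, then 4 → 2 output; the class name `HiddenModel` is illustrative) would look like:

```python
import torch
import torch.nn as nn

class HiddenModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(3, 4)  # input 3 -> hidden 4
        self.out = nn.Linear(4, 2)     # hidden 4 -> output 2

    def forward(self, x):
        # ReLU is applied to the hidden layer, NOT after the output layer.
        x = torch.relu(self.hidden(x))
        return self.out(x)

model = HiddenModel()
y = model(torch.randn(8, 3))
print(tuple(y.shape))  # → (8, 2)
```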
❓ Hyperparameter
advanced · 2:00
Effect of changing hidden layer size in a model class
If you increase the hidden layer size from 10 to 100 in a fully connected PyTorch model, which of the following is a likely effect?
💡 Hint
Think about how model size affects capacity and training behavior.
✗ Incorrect
Increasing hidden layer size increases parameters, which can lead to overfitting if not controlled. It usually slows training and uses more memory.
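To make the parameter growth concrete, here is a small comparison (the 20-input / 2-output sizes are illustrative, not from the question): widening one hidden layer by 10× grows the parameter count roughly 10×.

```python
import torch.nn as nn

def param_count(model):
    # Total number of learnable parameters (weights + biases).
    return sum(p.numel() for p in model.parameters())

small = nn.Sequential(nn.Linear(20, 10), nn.ReLU(), nn.Linear(10, 2))
large = nn.Sequential(nn.Linear(20, 100), nn.ReLU(), nn.Linear(100, 2))

# small: 20*10+10 + 10*2+2 = 232; large: 20*100+100 + 100*2+2 = 2302
print(param_count(small), param_count(large))  # → 232 2302
```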
🔧 Debug
advanced · 2:00
Identify the error in this PyTorch model class
What error will this PyTorch model class raise when instantiated?
PyTorch
import torch.nn as nn

class FaultyModel(nn.Module):
    def __init__(self):
        self.linear = nn.Linear(5, 2)

    def forward(self, x):
        return self.linear(x)

model = FaultyModel()
💡 Hint
Check the __init__ method and superclass initialization.
✗ Incorrect
The __init__ method does not call super().__init__(), so nn.Module's internal registries are never set up; assigning self.linear then raises an AttributeError ("cannot assign module before Module.__init__() call") at instantiation.
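A sketch contrasting the faulty class with the fix: the only change is calling super().__init__() before registering submodules.

```python
import torch
import torch.nn as nn

class FaultyModel(nn.Module):
    def __init__(self):
        # BUG: missing super().__init__() -- nn.Module's internal
        # registries (_parameters, _modules, ...) are never created.
        self.linear = nn.Linear(5, 2)

    def forward(self, x):
        return self.linear(x)

caught = False
try:
    FaultyModel()  # fails while assigning self.linear
except AttributeError:
    caught = True

class FixedModel(nn.Module):
    def __init__(self):
        super().__init__()  # initialize nn.Module first
        self.linear = nn.Linear(5, 2)

    def forward(self, x):
        return self.linear(x)

y = FixedModel()(torch.randn(4, 5))
print(caught, tuple(y.shape))  # → True (4, 2)
```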
🧠 Conceptual
expert · 2:00
Why define a model class instead of using nn.Sequential?
Which reason best explains why you might define a custom PyTorch model class instead of using nn.Sequential?
💡 Hint
Think about flexibility in model design and forward pass control.
✗ Incorrect
Custom model classes allow you to write any forward pass logic, including conditionals and multiple inputs, which nn.Sequential cannot handle.
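A minimal sketch of that flexibility (the class name `GatedModel` and the `use_b` flag are illustrative): the forward pass takes an extra argument, branches on it, and adds a residual connection, none of which nn.Sequential can express.

```python
import torch
import torch.nn as nn

class GatedModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.a = nn.Linear(5, 5)
        self.b = nn.Linear(5, 5)

    def forward(self, x, use_b=False):
        # Conditional branching and extra forward() arguments are
        # straightforward here but impossible with nn.Sequential.
        h = self.b(x) if use_b else self.a(x)
        return h + x  # residual connection: another non-Sequential pattern

model = GatedModel()
x = torch.randn(2, 5)
y1 = model(x)             # branch a
y2 = model(x, use_b=True) # branch b
print(tuple(y1.shape), tuple(y2.shape))  # → (2, 5) (2, 5)
```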