Practice - 5 Tasks
Answer the questions below
1. Fill in the blank (easy)
Complete the code to import the PyTorch module needed to define a model.
PyTorch
import torch.nn as [1]
Common Mistakes
Importing 'optim' instead of 'nn' for model definition.
Using 'data', which is for datasets, not models.
Explanation: The 'nn' module in PyTorch contains the classes used to build neural network models.
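For reference, a short sketch of the completed import and a quick check that `nn` really is the model-building module (assumes PyTorch is installed):

```python
# The conventional alias for PyTorch's neural-network module is `nn`.
import torch.nn as nn

# `nn` holds the model-building classes, e.g. nn.Module and nn.Linear.
layer = nn.Linear(10, 5)  # a layer class lives here, confirming `nn` is the right import
```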
2. Fill in the blank (medium)
Complete the code to define a model class that inherits from the correct PyTorch base class.
PyTorch
class MyModel([1]):
    def __init__(self):
        super().__init__()
Common Mistakes
Inheriting from nn.Sequential, which is a container, not a base class.
Using nn.Linear, which is a layer, not a base class.
Explanation: All PyTorch models should inherit from nn.Module to work properly.
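A minimal sketch of the completed skeleton, with the class name `MyModel` taken from the task (assumes PyTorch is installed):

```python
import torch.nn as nn

class MyModel(nn.Module):  # nn.Module is the base class for all PyTorch models
    def __init__(self):
        super().__init__()  # sets up the machinery that tracks parameters and submodules

model = MyModel()
```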
3. Fill in the blank (hard)
Fix the error in the forward method definition by completing the method signature correctly.
PyTorch
def [1](self, x):
    return self.layer(x)
Common Mistakes
Naming the method 'predict', which is not recognized by PyTorch.
Using 'call' or 'run', which are not valid method names here.
Explanation: The method must be named 'forward' for PyTorch to know how to compute outputs.
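To see why the name matters: calling a model like a function goes through nn.Module's `__call__`, which dispatches to `forward`. A small sketch (the class and layer names are illustrative; assumes PyTorch is installed):

```python
import torch
import torch.nn as nn

class Tiny(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(4, 2)

    def forward(self, x):  # must be named `forward`; __call__ dispatches here
        return self.layer(x)

model = Tiny()
out = model(torch.randn(3, 4))  # invokes forward() plus PyTorch's hook handling
```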
4. Fill in the blank (hard)
Fill both blanks to define a linear layer with 10 inputs and 5 outputs inside the model's __init__ method.
PyTorch
self.layer = nn.[1]([2], 5)
Common Mistakes
Using Conv2d, which is for images, not linear layers.
Putting the output size before the input size.
Explanation: The layer is created with nn.Linear(10, 5): the input size 10 comes first, followed by the 5 output features.
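A quick sketch of the completed line and the shapes it produces (assumes PyTorch is installed):

```python
import torch
import torch.nn as nn

layer = nn.Linear(10, 5)  # arguments are (in_features, out_features): 10 inputs, 5 outputs
x = torch.randn(8, 10)    # a batch of 8 vectors, each with 10 features
y = layer(x)              # each output vector has 5 features
```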
5. Fill in the blank (hard)
Fill all three blanks to complete the forward method that applies the linear layer and then a ReLU activation.
PyTorch
def forward(self, x):
    x = self.layer(x)
    x = nn.[1]()([2])
    return [3]
Common Mistakes
Using Sigmoid instead of ReLU as activation here.
Returning 'self' instead of the output tensor.
Explanation: After the linear layer, the ReLU activation is applied to x, and x is returned.
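Putting all five answers together, a hedged sketch of the complete model (assumes PyTorch is installed; nn.ReLU is instantiated inline to mirror the fill-in form of the task):

```python
import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(10, 5)  # 10 inputs, 5 outputs

    def forward(self, x):
        x = self.layer(x)  # apply the linear layer
        x = nn.ReLU()(x)   # then the ReLU activation
        return x           # return the output tensor, not self

model = MyModel()
out = model(torch.randn(2, 10))
```

In practice the activation is usually created once in `__init__` (or applied with `torch.relu`) rather than instantiated on every forward pass, but both forms compute the same result.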