Practice - 5 Tasks
Answer the questions below.
Task 1 (fill in the blank, easy)
Complete the code to define the constructor of a PyTorch layer class.

PyTorch
    class MyLayer(nn.Module):
        def __init__(self):
            super().__init__()
            self.linear = nn.Linear([1], 10)
Common Mistakes
Using None or 0 as input size causes errors.
Confusing input and output sizes.
Explanation: The input size to nn.Linear must be specified; here, 5 is the input feature size.
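For reference, a runnable sketch of the completed answer with the blank filled in as 5. The sample input and shape check below are illustrative additions, not part of the exercise:

```python
import torch
import torch.nn as nn

class MyLayer(nn.Module):
    def __init__(self):
        super().__init__()
        # First argument: input feature size (5); second: output size (10).
        self.linear = nn.Linear(5, 10)

layer = MyLayer()
x = torch.randn(4, 5)   # batch of 4 samples, 5 features each
out = layer.linear(x)
print(out.shape)        # torch.Size([4, 10])
```

Passing an input whose last dimension is not 5 would raise a shape-mismatch error, which is why the input size cannot be left as None or 0.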
Task 2 (fill in the blank, medium)
Complete the code to initialize a convolutional layer with 3 input channels and 16 output channels.

PyTorch
    class ConvLayer(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv2d([1], 16, kernel_size=3)
Common Mistakes
Using output channels as input channels.
Using kernel size as input channels.
Explanation: The first argument to nn.Conv2d is the number of input channels, here 3 for RGB images.
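A runnable sketch of the completed answer with the blank filled in as 3. The sample image size (32x32) below is an illustrative assumption:

```python
import torch
import torch.nn as nn

class ConvLayer(nn.Module):
    def __init__(self):
        super().__init__()
        # First argument: input channels (3 for RGB); second: output channels (16).
        self.conv = nn.Conv2d(3, 16, kernel_size=3)

layer = ConvLayer()
x = torch.randn(1, 3, 32, 32)   # one RGB image, 32x32 pixels
out = layer.conv(x)
print(out.shape)                # torch.Size([1, 16, 30, 30])
```

With a 3x3 kernel and no padding, each spatial dimension shrinks by 2 (32 - 3 + 1 = 30), while the channel dimension goes from 3 to 16.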
Task 3 (fill in the blank, hard)
Fix the error in the layer initialization by filling the blank correctly.

PyTorch
    class CustomLayer(nn.Module):
        def __init__(self, input_dim, output_dim):
            super().__init__()
            self.fc = nn.Linear(input_dim, [1])
Common Mistakes
Using input_dim for both input and output sizes.
Using None as output size.
Explanation: The second argument to nn.Linear is the output dimension, which should be output_dim, not input_dim.
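A runnable sketch of the corrected layer with the blank filled in as output_dim. The concrete dimensions (8 in, 3 out) below are illustrative assumptions:

```python
import torch
import torch.nn as nn

class CustomLayer(nn.Module):
    def __init__(self, input_dim, output_dim):
        super().__init__()
        # Second argument must be output_dim; using input_dim here would
        # silently produce a layer with the wrong output size.
        self.fc = nn.Linear(input_dim, output_dim)

layer = CustomLayer(8, 3)
out = layer.fc(torch.randn(2, 8))   # batch of 2, 8 features each
print(out.shape)                    # torch.Size([2, 3])
```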
Task 4 (fill in the blank, hard)
Fill both blanks to initialize a dropout layer with a 0.5 dropout rate and a ReLU activation.

PyTorch
    class DropoutReluLayer(nn.Module):
        def __init__(self):
            super().__init__()
            self.dropout = nn.Dropout([1])
            self.activation = nn.[2]()
Common Mistakes
Using the wrong dropout rate, such as 0.3.
Using Sigmoid instead of ReLU.
Explanation: The dropout rate is 0.5 and the activation is ReLU, as requested.
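A runnable sketch of the completed answer, with 0.5 in the first blank and ReLU in the second. The eval-mode demonstration below is an illustrative addition (in eval mode dropout is the identity, so the output is deterministic):

```python
import torch
import torch.nn as nn

class DropoutReluLayer(nn.Module):
    def __init__(self):
        super().__init__()
        self.dropout = nn.Dropout(0.5)   # drop each unit with probability 0.5
        self.activation = nn.ReLU()      # not Sigmoid: the task asks for ReLU

layer = DropoutReluLayer()
layer.eval()                             # dropout is a no-op in eval mode
x = torch.tensor([-1.0, 0.5, 2.0])
out = layer.activation(layer.dropout(x))
print(out)                               # tensor([0.0000, 0.5000, 2.0000])
```

ReLU zeroes the negative entry and passes the positive ones through; in training mode the dropout output would instead be random, with surviving units scaled by 1 / (1 - 0.5).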
Task 5 (fill in the blank, hard)
Fill all three blanks to define a layer with batch normalization, dropout, and a linear transformation.

PyTorch
    class ComplexLayer(nn.Module):
        def __init__(self, input_size, output_size):
            super().__init__()
            self.batchnorm = nn.BatchNorm1d([1])
            self.dropout = nn.Dropout([2])
            self.linear = nn.Linear([3], output_size)
Common Mistakes
Using output_size for batchnorm or linear input.
Using a dropout rate greater than 1.
Explanation: BatchNorm1d takes input_size, the dropout rate is 0.2, and the linear layer's input size is input_size.
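A runnable sketch of the completed answer, with input_size, 0.2, and input_size in the three blanks. The forward method, the concrete sizes (6 in, 4 out), and the eval-mode call below are illustrative additions, not part of the exercise:

```python
import torch
import torch.nn as nn

class ComplexLayer(nn.Module):
    def __init__(self, input_size, output_size):
        super().__init__()
        # BatchNorm1d normalizes each of the input_size features.
        self.batchnorm = nn.BatchNorm1d(input_size)
        self.dropout = nn.Dropout(0.2)
        # The linear layer consumes input_size features, not output_size.
        self.linear = nn.Linear(input_size, output_size)

    def forward(self, x):
        # Hypothetical ordering: normalize, then drop, then project.
        return self.linear(self.dropout(self.batchnorm(x)))

layer = ComplexLayer(6, 4)
layer.eval()                     # use running stats; dropout disabled
out = layer(torch.randn(3, 6))   # batch of 3 samples, 6 features each
print(out.shape)                 # torch.Size([3, 4])
```

All three blanks share one constraint: every module before the final projection must agree on input_size, and only the last nn.Linear argument mentions output_size.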