PyTorch · ~10 mins

__init__ for layers in PyTorch - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
1. Fill in the blank (easy)

Complete the code to define the constructor of a PyTorch layer class.

PyTorch
import torch.nn as nn

class MyLayer(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear([1], 10)
A. None
B. 10
C. 0
D. 5
Common Mistakes
Using None or 0 as input size causes errors.
Confusing input and output sizes.
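For context, `nn.Linear(in_features, out_features)` takes the input size first and the output size second. A minimal runnable sketch (the sizes 8 and 4 here are illustrative, not the task's answer):

```python
import torch
import torch.nn as nn

class DemoLinear(nn.Module):
    def __init__(self):
        super().__init__()
        # in_features first, out_features second
        self.linear = nn.Linear(8, 4)

    def forward(self, x):
        return self.linear(x)

layer = DemoLinear()
out = layer(torch.randn(2, 8))  # batch of 2 vectors, each with 8 features
print(out.shape)                # torch.Size([2, 4])
```

Passing a batch through the layer is the quickest way to confirm the sizes are in the right order: a mismatched `in_features` raises a shape error immediately.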
2. Fill in the blank (medium)

Complete the code to initialize a convolutional layer with 3 input channels and 16 output channels.

PyTorch
import torch.nn as nn

class ConvLayer(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d([1], 16, kernel_size=3)
A. 16
B. 32
C. 3
D. 1
Common Mistakes
Using output channels as input channels.
Using kernel size as input channels.
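`nn.Conv2d(in_channels, out_channels, kernel_size)` expects the number of input channels first, which must match the channel dimension of the incoming tensor. A runnable sketch with an RGB-like input (the output channel count 8 is illustrative):

```python
import torch
import torch.nn as nn

class DemoConv(nn.Module):
    def __init__(self):
        super().__init__()
        # 3 input channels (e.g. RGB), 8 output feature maps, 3x3 kernel
        self.conv = nn.Conv2d(3, 8, kernel_size=3)

    def forward(self, x):
        return self.conv(x)

layer = DemoConv()
out = layer(torch.randn(1, 3, 32, 32))  # (batch, channels, height, width)
print(out.shape)  # torch.Size([1, 8, 30, 30]) -- 3x3 kernel, no padding
```

Note that the spatial size shrinks from 32 to 30 because a 3x3 kernel with no padding trims one pixel from each side.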
3. Fill in the blank (hard)

Fix the error in the layer initialization by filling the blank correctly.

PyTorch
import torch.nn as nn

class CustomLayer(nn.Module):
    def __init__(self, input_dim, output_dim):
        super().__init__()
        self.fc = nn.Linear(input_dim, [1])
A. output_dim
B. input_dim
C. input_dim * 2
D. None
Common Mistakes
Using input_dim for both input and output sizes.
Using None as output size.
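Passing dimensions through `__init__` arguments keeps a layer reusable across different sizes. A sketch of the pattern (the class name and sizes here are illustrative):

```python
import torch
import torch.nn as nn

class DemoFC(nn.Module):
    def __init__(self, input_dim, output_dim):
        super().__init__()
        # first argument: size of the incoming features
        # second argument: size of the produced features
        self.fc = nn.Linear(input_dim, output_dim)

    def forward(self, x):
        return self.fc(x)

layer = DemoFC(input_dim=16, output_dim=2)
out = layer(torch.randn(4, 16))
print(out.shape)  # torch.Size([4, 2])
```

Because the dimensions are constructor arguments rather than hard-coded numbers, the same class can be instantiated for any input/output size.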
4. Fill in the blank (hard)

Fill both blanks to initialize a dropout layer with a 0.5 dropout rate and a ReLU activation.

PyTorch
import torch.nn as nn

class DropoutReluLayer(nn.Module):
    def __init__(self):
        super().__init__()
        self.dropout = nn.Dropout([1])
        self.activation = nn.[2]()
A. 0.5
B. 0.3
C. ReLU
D. Sigmoid
Common Mistakes
Using wrong dropout rate like 0.3.
Using Sigmoid instead of ReLU.
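`nn.Dropout(p)` takes the drop probability as its argument, while activations such as `nn.ReLU` are constructed with no arguments. A sketch showing that dropout is only active in training mode (the rate 0.5 matches the task statement above):

```python
import torch
import torch.nn as nn

class DemoDropoutRelu(nn.Module):
    def __init__(self):
        super().__init__()
        self.dropout = nn.Dropout(0.5)  # p = probability of zeroing an element
        self.activation = nn.ReLU()

    def forward(self, x):
        return self.dropout(self.activation(x))

layer = DemoDropoutRelu()
layer.eval()           # dropout is a no-op in eval mode
x = torch.ones(2, 3)
out = layer(x)
print(out)             # all ones: ReLU keeps positives, eval skips dropout
```

Switching the module to `layer.train()` would randomly zero roughly half the elements and rescale the survivors by 1/(1 - p).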
5. Fill in the blank (hard)

Fill all three blanks to define a layer with batch normalization, dropout, and linear transformation.

PyTorch
import torch.nn as nn

class ComplexLayer(nn.Module):
    def __init__(self, input_size, output_size):
        super().__init__()
        self.batchnorm = nn.BatchNorm1d([1])
        self.dropout = nn.Dropout([2])
        self.linear = nn.Linear([3], output_size)
A. input_size
B. 0.2
D. output_size
Common Mistakes
Using output_size for batchnorm or linear input.
Using dropout rate greater than 1.
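`nn.BatchNorm1d(num_features)` and the first argument of `nn.Linear` must both match the incoming feature count, while `nn.Dropout` takes a probability between 0 and 1. A sketch of the same three-layer pattern with illustrative sizes:

```python
import torch
import torch.nn as nn

class DemoComplex(nn.Module):
    def __init__(self, input_size, output_size):
        super().__init__()
        self.batchnorm = nn.BatchNorm1d(input_size)      # one stat per input feature
        self.dropout = nn.Dropout(0.2)                   # rate must be in [0, 1]
        self.linear = nn.Linear(input_size, output_size) # input size still input_size:
                                                         # batchnorm and dropout do not
                                                         # change the feature count

    def forward(self, x):
        return self.linear(self.dropout(self.batchnorm(x)))

layer = DemoComplex(input_size=6, output_size=3)
layer.eval()  # use running stats, skip dropout
out = layer(torch.randn(5, 6))
print(out.shape)  # torch.Size([5, 3])
```

The key point is that normalization and dropout preserve the feature dimension, so the linear layer still receives `input_size` features; only the final `nn.Linear` changes the width to `output_size`.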