PyTorch · ~10 mins

Transformer encoder in PyTorch - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1: Fill in the blank (easy)

Complete the code to import the TransformerEncoder class from PyTorch.

PyTorch
from torch.nn import [1]
A. TransformerEncoder
B. TransformerLayer
C. Transformer
D. TransformerDecoder
Common Mistakes
Importing TransformerDecoder instead of TransformerEncoder.
Trying to import a non-existent class like TransformerLayer.
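The import this task practices can be sketched as follows; both classes live in `torch.nn`, and there is no `torch.nn.TransformerLayer`:

```python
# Minimal sketch of the import this task practices. TransformerEncoder
# sits in torch.nn alongside TransformerEncoderLayer; TransformerLayer
# does not exist.
from torch.nn import TransformerEncoder, TransformerEncoderLayer

layer = TransformerEncoderLayer(d_model=512, nhead=8)
encoder = TransformerEncoder(layer, num_layers=1)
print(type(encoder).__name__)  # TransformerEncoder
```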
Task 2: Fill in the blank (medium)

Complete the code to create a TransformerEncoderLayer with 512 embedding size.

PyTorch
encoder_layer = torch.nn.TransformerEncoderLayer(d_model=[1], nhead=8)
A. 512
B. 256
C. 1024
D. 128
Common Mistakes
Using a smaller or larger embedding size like 256 or 1024 without reason.
Confusing d_model with number of heads.
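A runnable sketch of the layer construction this task practices: `d_model` is the embedding size and `nhead` is the number of attention heads, and `d_model` must be divisible by `nhead` (512 / 8 = 64 dimensions per head):

```python
import torch

# d_model is the embedding size (512 here), not the head count;
# nhead=8 splits those 512 dimensions into 8 heads of 64 each.
encoder_layer = torch.nn.TransformerEncoderLayer(d_model=512, nhead=8)

# By default the layer expects (seq_len, batch, d_model) input,
# and the output keeps the same shape.
x = torch.rand(10, 32, 512)
y = encoder_layer(x)
print(tuple(y.shape))  # (10, 32, 512)
```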
Task 3: Fill in the blank (hard)

Fix the error in the code by completing the blank to create a TransformerEncoder with 6 layers.

PyTorch
transformer_encoder = torch.nn.TransformerEncoder(encoder_layer, num_layers=[1])
A. 8
B. 6
C. 4
D. 2
Common Mistakes
Using too few or too many layers like 2 or 8.
Confusing num_layers with number of attention heads.
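The stacking step can be sketched like this: `num_layers` controls how many copies of `encoder_layer` the `TransformerEncoder` stacks, independently of the number of attention heads inside each layer:

```python
import torch

# num_layers (6) is the stack depth; nhead (8) is per-layer and
# unrelated to it.
encoder_layer = torch.nn.TransformerEncoderLayer(d_model=512, nhead=8)
transformer_encoder = torch.nn.TransformerEncoder(encoder_layer, num_layers=6)
print(len(transformer_encoder.layers))  # 6
```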
Task 4: Fill in the blank (hard)

Fill both blanks to create a mask tensor of shape (10, 10) filled with -inf for masked positions.

PyTorch
mask = torch.full((10, 10), [1])
mask = mask.masked_fill(mask == [2], float('-inf'))
A. 0
B. 1
C. -float('inf')
D. -1
Common Mistakes
Filling the tensor with 0s instead of 1s.
Using -1 or -inf directly in the fill method.
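The two-step pattern this task practices can be sketched as follows, assuming the intended sentinel fill value is 1 (per the Common Mistakes note): fill the tensor with the sentinel, then let `masked_fill` replace every position matching it with `-inf`:

```python
import torch

# Step 1: fill a (10, 10) tensor with a sentinel value (1 here).
mask = torch.full((10, 10), 1.0)
# Step 2: masked_fill takes a boolean mask, so the comparison value
# must match the fill value; matching positions become -inf.
mask = mask.masked_fill(mask == 1.0, float('-inf'))
print(bool(torch.isinf(mask).all()))  # True
```

In practice, attention masks usually mark only some positions with the sentinel (e.g. the upper triangle for a causal mask) so that only those become `-inf`.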
Task 5: Fill in the blank (hard)

Fill all three blanks to create a TransformerEncoderLayer, pass input through it, and get output shape.

PyTorch
encoder_layer = torch.nn.TransformerEncoderLayer(d_model=[1], nhead=[2])
input_tensor = torch.rand(5, 32, [3])  # (sequence_length, batch_size, embedding_dim)
output = encoder_layer(input_tensor)
output_shape = output.shape
A. 512
B. 8
D. 256
Common Mistakes
Mismatching embedding sizes between layer and input tensor.
Using wrong number of heads for the embedding size.
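An end-to-end sketch of the pipeline this task practices: the input tensor's last dimension must equal the layer's `d_model`, and `d_model` must be divisible by `nhead`:

```python
import torch

# The embedding dim of the input (last axis) must match d_model,
# and d_model must be divisible by nhead (512 / 8 = 64).
encoder_layer = torch.nn.TransformerEncoderLayer(d_model=512, nhead=8)
input_tensor = torch.rand(5, 32, 512)  # (seq_len, batch, embedding_dim)
output = encoder_layer(input_tensor)
print(tuple(output.shape))  # (5, 32, 512) — same shape in, same shape out
```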