NLP / ML · ~10 mins

Encoder-decoder with attention in NLP - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1 (fill in blank, easy)

Complete the code to define the encoder embedding layer.

self.embedding = nn.[1](input_dim, embed_dim)
A. Linear
B. LSTM
C. Conv1d
D. Embedding
Common Mistakes
Using Linear instead of Embedding for token lookup.
Confusing LSTM with embedding layer.
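Worked example (a minimal sketch, with illustrative vocabulary and embedding sizes): `nn.Embedding` looks up a dense vector per integer token id, whereas `nn.Linear` would expect float inputs, not indices.

```python
import torch
import torch.nn as nn

input_dim, embed_dim = 100, 16            # vocab size and embedding size (illustrative)
embedding = nn.Embedding(input_dim, embed_dim)

tokens = torch.tensor([[1, 5, 7]])        # batch of token ids, shape (1, 3)
out = embedding(tokens)                   # dense vectors, shape (1, 3, 16)
```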
Task 2 (fill in blank, medium)

Complete the code to compute attention weights using softmax.

attn_weights = F.[1](scores, dim=1)
A. tanh
B. softmax
C. relu
D. sigmoid
Common Mistakes
Using sigmoid which outputs independent probabilities.
Using relu or tanh which do not normalize.
Task 3 (fill in blank, hard)

Fix the error in the decoder's context vector calculation.

context = torch.bmm(attn_weights.unsqueeze(1), encoder_outputs).[1](1)
A. squeeze
B. unsqueeze
C. view
D. permute
Common Mistakes
Using unsqueeze, which adds a dimension instead of removing one.
Using permute changes dimension order incorrectly.
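Worked example (a minimal sketch with illustrative shapes): `bmm` needs a (batch, 1, src_len) weight tensor, and `squeeze(1)` then removes the singleton dimension from the result.

```python
import torch

batch, src_len, hidden = 2, 5, 8
attn_weights = torch.softmax(torch.randn(batch, src_len), dim=1)  # (B, S)
encoder_outputs = torch.randn(batch, src_len, hidden)             # (B, S, H)

# (B, 1, S) x (B, S, H) -> (B, 1, H); squeeze(1) yields (B, H)
context = torch.bmm(attn_weights.unsqueeze(1), encoder_outputs).squeeze(1)
```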
Task 4 (fill in blank, hard)

Fill both blanks to complete the attention score calculation using dot product.

scores = torch.bmm(encoder_outputs, decoder_hidden.[1](2, 1, 0)).[2](2, 1, 0)
A. unsqueeze
B. squeeze
C. permute
D. view
Common Mistakes
Using squeeze or unsqueeze which changes dimension count incorrectly.
Using view which reshapes but does not reorder dimensions.
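Worked example (a minimal sketch): here `decoder_hidden` is assumed to follow the RNN convention of shape (num_layers, batch, hidden), so the exact permute arguments depend on that layout; the point is that `permute` reorders dimensions so `bmm` can form the dot products, while `view` would only reshape without reordering.

```python
import torch

batch, src_len, hidden = 2, 5, 8
encoder_outputs = torch.randn(batch, src_len, hidden)   # (B, S, H)
decoder_hidden = torch.randn(1, batch, hidden)          # (1, B, H), RNN convention

# permute to (B, H, 1) so bmm gives (B, S, 1) dot-product scores, then drop dim 2
scores = torch.bmm(encoder_outputs, decoder_hidden.permute(1, 2, 0)).squeeze(2)
```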
Task 5 (fill in blank, hard)

Fill all three blanks to complete the decoder forward pass with attention.

embedded = self.embedding(input_step)
attn_weights = F.softmax(torch.bmm(encoder_outputs, decoder_hidden.[1](2, 1, 0)), dim=1)
context = torch.bmm(attn_weights.unsqueeze(1), encoder_outputs).[2](1)
rnn_input = torch.cat((embedded, context), dim=[3])
A. unsqueeze
B. squeeze
C. 2
D. permute
Common Mistakes
Using unsqueeze instead of permute for decoder hidden state.
Concatenating on wrong dimension causing shape errors.
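Worked example (a minimal end-to-end sketch under one assumed shape convention: `input_step` is (batch, 1) and `decoder_hidden` is (num_layers, batch, hidden); all sizes are illustrative): embed the current token, score and normalize attention over the encoder outputs, form the context vector, and concatenate embedding and context on the feature dimension.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

batch, src_len, hidden, embed_dim, vocab = 2, 5, 8, 8, 50
embedding = nn.Embedding(vocab, embed_dim)
encoder_outputs = torch.randn(batch, src_len, hidden)   # (B, S, H)
decoder_hidden = torch.randn(1, batch, hidden)          # (1, B, H)

input_step = torch.tensor([[3], [7]])                   # (B, 1): one token per sequence
embedded = embedding(input_step)                        # (B, 1, E)
attn_weights = F.softmax(
    torch.bmm(encoder_outputs, decoder_hidden.permute(1, 2, 0)).squeeze(2),
    dim=1)                                              # (B, S), rows sum to 1
context = torch.bmm(attn_weights.unsqueeze(1), encoder_outputs)  # (B, 1, H)
rnn_input = torch.cat((embedded, context), dim=2)       # (B, 1, E + H)
```

Concatenating on the last (feature) dimension keeps batch and step dimensions aligned; any other `dim` raises a shape error here.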