
Sequence-to-sequence architecture in NLP - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1: fill in the blank (easy)

Complete the code to create an embedding layer for the input sequence.

embedding_layer = nn.Embedding(num_embeddings=[1], embedding_dim=256)
A. vocab_size
B. sequence_length
C. batch_size
D. hidden_size
Common Mistakes
Using sequence length instead of vocabulary size
Using batch size as number of embeddings
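For context, here is a minimal runnable sketch of how an embedding layer typically fits into a seq2seq encoder. The specific numbers (a vocabulary of 10,000 and the batch/sequence sizes) are illustrative assumptions, not part of the task: the key point is that the embedding table needs one row per vocabulary entry, so `num_embeddings` is the vocabulary size, never the sequence or batch length.

```python
import torch
import torch.nn as nn

vocab_size = 10000  # illustrative vocabulary size
embedding_layer = nn.Embedding(num_embeddings=vocab_size, embedding_dim=256)

# Token IDs index rows of the embedding table:
# (batch, seq_len) of ints -> (batch, seq_len, 256) of floats
tokens = torch.randint(0, vocab_size, (4, 12))
embedded = embedding_layer(tokens)
print(embedded.shape)  # torch.Size([4, 12, 256])
```

If `num_embeddings` were set to the sequence length instead, any token ID at or above that length would raise an index error.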
Task 2: fill in the blank (medium)

Complete the code to initialize the encoder LSTM with the correct input size.

encoder_lstm = nn.LSTM(input_size=[1], hidden_size=512, batch_first=True)
A. embedding_dim
B. sequence_length
C. vocab_size
D. hidden_size
Common Mistakes
Using vocabulary size as input size
Using sequence length as input size
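A sketch of the idea behind this task (the embedding dimension of 256 and the tensor shapes are illustrative assumptions): the encoder LSTM consumes embedding vectors one time step at a time, so its `input_size` must equal the embedding dimension, not the vocabulary size or the sequence length.

```python
import torch
import torch.nn as nn

embedding_dim = 256  # must match the embedding layer's output width
encoder_lstm = nn.LSTM(input_size=embedding_dim, hidden_size=512, batch_first=True)

# Embedded input: (batch, seq_len, embedding_dim)
embedded = torch.randn(4, 12, embedding_dim)
encoder_outputs, (hidden, cell) = encoder_lstm(embedded)
print(encoder_outputs.shape)  # torch.Size([4, 12, 512])
print(hidden.shape)           # torch.Size([1, 4, 512])
```

The sequence length never appears in the LSTM's constructor; the module handles sequences of any length at call time.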
Task 3: fill in the blank (hard)

Fix the error in the decoder forward pass by selecting the correct input to the decoder LSTM.

decoder_output, (hidden, cell) = decoder_lstm([1], (hidden, cell))
A. decoder_hidden
B. encoder_output
C. encoder_input
D. decoder_input
Common Mistakes
Passing encoder output as input
Passing encoder input as input
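A minimal sketch of this step in context (the sizes and the single-layer setup are illustrative assumptions): the encoder's final `(hidden, cell)` state seeds the decoder, but the decoder's *input* tensor is the embedded target-side token (e.g. the start-of-sequence token, or the previously generated token), not the encoder's output sequence.

```python
import torch
import torch.nn as nn

decoder_lstm = nn.LSTM(input_size=256, hidden_size=512, batch_first=True)

# State handed over from the encoder: (num_layers, batch, hidden_size)
hidden = torch.randn(1, 4, 512)
cell = torch.randn(1, 4, 512)

# One embedded target token per batch element: (batch, 1, embedding_dim)
decoder_input = torch.randn(4, 1, 256)

decoder_output, (hidden, cell) = decoder_lstm(decoder_input, (hidden, cell))
print(decoder_output.shape)  # torch.Size([4, 1, 512])
```

Passing `encoder_output` here would feed 512-dimensional encoder states into an LSTM expecting 256-dimensional embeddings, and would also blur the encoder/decoder roles.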
Task 4: fill in the blank (hard)

Fill both blanks to complete the attention score calculation using dot product.

attention_scores = torch.bmm(encoder_outputs, [1].unsqueeze(2)).squeeze(2)
attention_weights = torch.softmax(attention_scores, dim=[2])
A. decoder_hidden
B. encoder_hidden
C. 1
D. 2
Common Mistakes
Using encoder hidden instead of decoder hidden
Applying softmax over wrong dimension
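A runnable sketch of dot-product attention with illustrative shapes (batch 4, source length 12, hidden size 512 are assumptions): each score is the dot product between one encoder output and the *current decoder* state, and the softmax must normalise over the source positions (dim=1) so the weights for each decoding step sum to one.

```python
import torch

batch, src_len, hidden_size = 4, 12, 512
encoder_outputs = torch.randn(batch, src_len, hidden_size)
decoder_hidden = torch.randn(batch, hidden_size)  # current decoder state

# bmm: (batch, src_len, hidden) @ (batch, hidden, 1) -> (batch, src_len, 1)
attention_scores = torch.bmm(encoder_outputs, decoder_hidden.unsqueeze(2)).squeeze(2)

# Normalise over source positions (dim=1), not the batch dimension.
attention_weights = torch.softmax(attention_scores, dim=1)
print(attention_weights.shape)       # torch.Size([4, 12])
print(attention_weights.sum(dim=1))  # each entry is 1.0
```

Softmaxing over dim=0 would mix scores across unrelated batch elements instead of across source positions.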
Task 5: fill in the blank (hard)

Fill all three blanks to complete the decoder output calculation with attention context.

context = torch.bmm(attention_weights.unsqueeze(1), [1])
combined = torch.cat((context.squeeze(1), [2]), dim=[3])
output = decoder_fc(combined)
A. encoder_outputs
B. decoder_output.squeeze(1)
C. 1
D. 0
Common Mistakes
Using decoder output instead of encoder outputs for context
Concatenating along wrong dimension
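To close the loop, here is a sketch of the full attention read-out with illustrative shapes (batch 4, source length 12, hidden size 512, vocabulary 10,000 are assumptions): the context vector is an attention-weighted sum of the *encoder* outputs, and it is concatenated with the decoder state along the feature dimension (dim=1 once both tensors are squeezed to 2-D) before the final projection.

```python
import torch
import torch.nn as nn

batch, src_len, hidden_size, vocab_size = 4, 12, 512, 10000
encoder_outputs = torch.randn(batch, src_len, hidden_size)
attention_weights = torch.softmax(torch.randn(batch, src_len), dim=1)
decoder_output = torch.randn(batch, 1, hidden_size)
decoder_fc = nn.Linear(2 * hidden_size, vocab_size)

# Weighted sum of encoder outputs:
# (batch, 1, src_len) @ (batch, src_len, hidden) -> (batch, 1, hidden)
context = torch.bmm(attention_weights.unsqueeze(1), encoder_outputs)

# Concatenate context and decoder state along the feature dimension:
# (batch, hidden) + (batch, hidden) -> (batch, 2 * hidden)
combined = torch.cat((context.squeeze(1), decoder_output.squeeze(1)), dim=1)
output = decoder_fc(combined)
print(output.shape)  # torch.Size([4, 10000])
```

Using `decoder_output` in the `bmm` would make the "context" ignore the source sentence entirely, and concatenating along dim=0 would stack batch elements instead of features.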