
Transformer modeling in Simulink - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below. In each code snippet, a bracketed number such as [1] marks a blank; fill it with one of the lettered options.
Task 1: Fill in the blank (easy)

Complete the code to define the input layer for a transformer model in Simulink.

inputLayer = sequenceInputLayer([1]);

Options: A. 256   B. 512   C. 1024   D. 128

Common mistakes:
- Choosing a size that does not match the embedding dimension.
- Confusing sequence length with embedding size.
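For reference, a completed sketch under the common assumption that the model's embedding dimension (d_model) is 512, as in the original Transformer; the correct option depends on the embedding size the rest of this exercise's model uses.

```matlab
% Sketch: the sequence input size must equal the embedding dimension.
% Assumes d_model = 512 (the classic Transformer choice); adjust this
% to whatever embedding size the rest of the model uses.
inputLayer = sequenceInputLayer(512);
```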
Task 2: Fill in the blank (medium)

Complete the code to add a multi-head attention layer with 8 heads in Simulink.

multiHeadAttn = multiHeadAttentionLayer([1], 'Name', 'multiHeadAttn');

Options: A. 8   B. 4   C. 16   D. 2

Common mistakes:
- Using too few or too many heads without adjusting other parameters.
- Confusing the number of heads with the embedding size.
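A completed sketch: the prompt itself fixes the head count at 8. Note that multiHeadAttentionLayer is the name this exercise uses; in recent MATLAB Deep Learning Toolbox releases the comparable built-in is selfAttentionLayer.

```matlab
% Sketch: the first argument is the number of attention heads, which the
% prompt specifies as 8. Function name follows this exercise's API.
multiHeadAttn = multiHeadAttentionLayer(8, 'Name', 'multiHeadAttn');
```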
Task 3: Fill in the blank (hard)

Fix the error in the code to correctly add a position-wise feedforward layer in Simulink.

feedForward = fullyConnectedLayer([1], 'Name', 'feedForward');

Options: A. 512   B. 2048   C. 256   D. 128

Common mistakes:
- Using the embedding size instead of the larger feedforward size.
- Choosing a size too small to capture complex features.
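A completed sketch, assuming the conventional transformer ratio in which the feedforward inner size is about 4x the embedding dimension (e.g. 2048 for d_model = 512, as in the original Transformer):

```matlab
% Sketch: the position-wise feedforward layer is wider than the embedding.
% 2048 assumes d_model = 512 with the standard 4x expansion ratio.
feedForward = fullyConnectedLayer(2048, 'Name', 'feedForward');
```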
Task 4: Fill in the blanks (hard)

Fill both blanks to correctly create a transformer encoder block with normalization and dropout layers.

encoderBlock = [layerNormalizationLayer('Name', 'norm1'), dropoutLayer([1], 'Name', 'dropout1'), layerNormalizationLayer('Name', 'norm2'), dropoutLayer([2], 'Name', 'dropout2')];

Options: A. 0.1   B. 0.2   C. 0.3   D. 0.4

Common mistakes:
- Using dropout rates that are too high, causing underfitting.
- Using different dropout rates for the two dropout layers.
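A completed sketch, assuming the widely used default transformer dropout rate of 0.1; the key point from the hints is that both dropout layers share the same rate.

```matlab
% Sketch: encoder block with matching dropout rates. 0.1 is the common
% default for transformer encoders; both dropout layers use the same rate.
encoderBlock = [ ...
    layerNormalizationLayer('Name', 'norm1'), ...
    dropoutLayer(0.1, 'Name', 'dropout1'), ...
    layerNormalizationLayer('Name', 'norm2'), ...
    dropoutLayer(0.1, 'Name', 'dropout2')];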
Task 5: Fill in the blanks (hard)

Fill all three blanks to define the transformer model layers, including input, encoder, and output layers.

layers = [sequenceInputLayer([1], 'Name', 'input'), transformerEncoderLayer([2], [3], 'Name', 'encoder'), fullyConnectedLayer(10, 'Name', 'output')];

Options: A. 128   B. 8   C. 2048   D. 256

Common mistakes:
- Mixing up the order of parameters in the encoder layer.
- Using inconsistent sizes that don't match the transformer architecture.
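One internally consistent completion, under two assumptions: that this exercise's transformerEncoderLayer takes the number of heads first and the feedforward size second (the "parameter order" hint), and that the embedding size is 256; the input size must match whatever embedding dimension the rest of the model uses.

```matlab
% Sketch, assuming transformerEncoderLayer(numHeads, ffSize) per this
% exercise's API: 8 heads, a 2048-unit feedforward size, and a
% 256-dimensional input (assumed embedding size; use your model's value).
layers = [ ...
    sequenceInputLayer(256, 'Name', 'input'), ...
    transformerEncoderLayer(8, 2048, 'Name', 'encoder'), ...
    fullyConnectedLayer(10, 'Name', 'output')];
```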