Practice - 5 Tasks
Answer the questions below
1. Fill in the blank (easy)
Complete the code to add a skip connection by summing the input tensor with the output tensor.
Computer Vision

    def residual_block(x):
        out = some_layer(x)
        out = some_activation(out)
        return out [1] x
Common Mistakes
Using subtraction or multiplication instead of addition.
Forgetting to add the input tensor back.
Explanation: In ResNet, the skip connection adds the input tensor to the output tensor using the '+' operator.
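The completed pattern can be sketched in plain NumPy. This is a toy stand-in: `some_layer` and `some_activation` below are illustrative placeholders for shape-preserving layers, not real Keras layers.

```python
import numpy as np

def some_layer(x):
    # illustrative placeholder for a shape-preserving layer
    return x * 0.5

def some_activation(x):
    # ReLU
    return np.maximum(x, 0.0)

def residual_block(x):
    out = some_layer(x)
    out = some_activation(out)
    return out + x  # skip connection: add the input tensor back

x = np.array([[-2.0, 3.0]])
y = residual_block(x)  # branch output plus the original input, element-wise
```

Because the addition is element-wise, the block's output keeps the input's shape, which is what lets residual blocks stack freely.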
2. Fill in the blank (medium)
Complete the code to apply batch normalization after the convolution layer in the residual block.
Computer Vision

    def residual_block(x):
        out = Conv2D(64, (3, 3), padding='same')(x)
        out = [1](out)
        out = Activation('relu')(out)
        return out + x
Common Mistakes
Using Dropout or pooling layers instead of normalization.
Placing BatchNormalization after activation instead of before.
Explanation: BatchNormalization normalizes the output of the convolution layer before the activation is applied.
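The normalization itself is simple to sketch. Here is a minimal NumPy version of the inference-time computation, assuming NHWC layout and omitting the learned scale and shift parameters that the real Keras layer also applies:

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # normalize each channel over the batch and spatial axes (NHWC layout)
    mean = x.mean(axis=(0, 1, 2), keepdims=True)
    var = x.var(axis=(0, 1, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.default_rng(0).normal(2.0, 3.0, size=(4, 8, 8, 2))
out = batch_norm(x)
# after normalization, each channel has mean ~0 and std ~1
```

Feeding the convolution output through this step before ReLU keeps activations in a well-scaled range, which is why the exercise places it between Conv2D and Activation.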
3. Fill in the blank (hard)
Fix the error in the code to correctly implement a skip connection when the input and output shapes differ.
Computer Vision

    def residual_block(x):
        out = Conv2D(128, (3, 3), strides=2, padding='same')(x)
        out = BatchNormalization()(out)
        out = Activation('relu')(out)
        shortcut = [1](x)
        return out + shortcut
Common Mistakes
Using pooling or dense layers which change shape incorrectly.
Not adjusting the shortcut shape causing shape mismatch errors.
Explanation: When the input and output shapes differ, a 1x1 convolution with matching strides adjusts the shortcut to the correct shape.
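Why a strided 1x1 convolution works can be seen from the shapes alone. In this NumPy sketch the 1x1 kernel is treated as a plain channel-mixing matrix and the stride as spatial subsampling; the names and sizes are illustrative:

```python
import numpy as np

def conv1x1_stride2(x, w):
    # 1x1 convolution with stride 2: subsample spatially, then mix channels
    # w has shape (in_channels, out_channels)
    return x[:, ::2, ::2, :] @ w

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 8, 8, 64))   # input: 8x8 spatial, 64 channels
w = rng.normal(size=(64, 128))       # 1x1 kernel as a channel matrix
shortcut = conv1x1_stride2(x, w)
# shortcut.shape is (1, 4, 4, 128), matching the stride-2 main branch
```

The stride halves the spatial dimensions to 4x4 and the channel matrix projects 64 channels to 128, so the element-wise addition with the main branch is now valid.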
4. Fill in the blanks (hard)
Fill both blanks to create a residual block that applies two convolution layers with batch normalization and adds the skip connection.
Computer Vision

    def residual_block(x):
        out = Conv2D(64, (3, 3), padding='same')(x)
        out = [1](out)
        out = Activation('relu')(out)
        out = Conv2D(64, (3, 3), padding='same')(out)
        out = [2](out)
        return out + x
Common Mistakes
Using Dropout or pooling layers instead of BatchNormalization.
Skipping normalization after the second convolution.
Explanation: BatchNormalization is applied after each convolution layer, before the activation or the final addition.
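Putting both blanks together, the identity-shortcut block can be sketched in NumPy. This is a toy model, not the Keras implementation: 1x1 channel mixes stand in for the 3x3 convolutions, and the batch norm omits learned parameters.

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # per-channel normalization over batch and spatial axes (NHWC)
    mean = x.mean(axis=(0, 1, 2), keepdims=True)
    var = x.var(axis=(0, 1, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def residual_block(x, w1, w2):
    # two conv stages (1x1 channel mixes stand in for 3x3 convs),
    # each followed by batch normalization
    out = batch_norm(x @ w1)
    out = np.maximum(out, 0.0)   # ReLU after the first normalization
    out = batch_norm(out @ w2)
    return out + x               # identity skip connection

rng = np.random.default_rng(2)
x = rng.normal(size=(2, 8, 8, 64))
w1 = rng.normal(size=(64, 64))
w2 = rng.normal(size=(64, 64))
y = residual_block(x, w1, w2)
```

Because neither stage changes the channel count or spatial size, the input can be added back directly with no shortcut projection.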
5. Fill in the blanks (hard)
Fill all three blanks to define a ResNet block that adjusts the shortcut path and applies two convolution layers with batch normalization.
Computer Vision

    def resnet_block(x):
        shortcut = [1](x)
        out = Conv2D(128, (3, 3), strides=2, padding='same')(x)
        out = [2](out)
        out = Activation('relu')(out)
        out = Conv2D(128, (3, 3), padding='same')(out)
        out = [3](out)
        return out + shortcut
Common Mistakes
Using pooling instead of a convolution for the shortcut.
Skipping batch normalization layers.
Explanation: The shortcut uses a 1x1 convolution to match the output shape, and batch normalization is applied after each convolution layer.
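The full block, shortcut projection included, can be sketched end to end in NumPy. As before, 1x1 channel mixes stand in for the real convolutions and all names and sizes here are illustrative:

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # per-channel normalization over batch and spatial axes (NHWC)
    mean = x.mean(axis=(0, 1, 2), keepdims=True)
    var = x.var(axis=(0, 1, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def conv1x1(x, w, stride=1):
    # 1x1 "conv": optional spatial subsampling, then channel mixing
    return x[:, ::stride, ::stride, :] @ w

def resnet_block(x, w_short, w1, w2):
    # shortcut path: 1x1 projection with stride 2 to match the main branch
    shortcut = conv1x1(x, w_short, stride=2)
    # main branch: conv -> batch norm -> ReLU -> conv -> batch norm
    out = conv1x1(x, w1, stride=2)
    out = batch_norm(out)
    out = np.maximum(out, 0.0)
    out = conv1x1(out, w2)
    out = batch_norm(out)
    return out + shortcut

rng = np.random.default_rng(1)
x = rng.normal(size=(2, 8, 8, 64))
w_short = rng.normal(size=(64, 128))  # projects 64 -> 128 channels
w1 = rng.normal(size=(64, 128))
w2 = rng.normal(size=(128, 128))
y = resnet_block(x, w_short, w1, w2)
# both paths end at shape (2, 4, 4, 128), so the addition is valid
```

The key invariant is that the shortcut projection applies the same stride and output channel count as the first convolution in the main branch, so the two paths always meet at the same shape.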