Computer Vision · ~20 mins

Inception modules in Computer Vision - Practice Problems & Coding Challenges

Challenge - 5 Problems
🎖️
Inception Mastery
Get all challenges correct to earn this badge!
Test your skills under time pressure!
🧠 Conceptual · intermediate · Time limit: 2:00
What is the main purpose of an Inception module in a convolutional neural network?

Imagine you want your model to look at an image in many ways at once, like using different-sized glasses to see fine details and the big picture. What is the main purpose of an Inception module?

A) To replace convolutional layers with fully connected layers for faster training.
B) To reduce the number of layers in the network by merging all convolutions into one.
C) To combine multiple convolution filters of different sizes to capture features at various scales simultaneously.
D) To increase the image size before feeding it into the network.
💡 Hint

Think about how the module uses different filter sizes in parallel.
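To see the idea behind the hint concretely, here is a toy NumPy sketch (not the original GoogLeNet code): several parallel "branches" process the same input, each standing in for a convolution with a different kernel size, and their feature maps are joined along the channel axis. The `toy_branch` helper is a hypothetical stand-in for a same-padded convolution.

```python
import numpy as np

x = np.random.rand(1, 28, 28, 192)  # (batch, height, width, channels)

def toy_branch(inp, out_channels):
    # Stand-in for a same-padded, stride-1 convolution: spatial size is
    # preserved, only the channel count changes.
    return np.random.rand(inp.shape[0], inp.shape[1], inp.shape[2], out_channels)

b1 = toy_branch(x, 64)   # plays the role of a 1x1 branch: fine, local detail
b2 = toy_branch(x, 128)  # plays the role of a 3x3 branch: mid-scale patterns
b3 = toy_branch(x, 32)   # plays the role of a 5x5 branch: larger context

# The branches run in parallel on the same input; concatenation merges
# their multi-scale features into one tensor.
merged = np.concatenate([b1, b2, b3], axis=-1)
print(merged.shape)  # (1, 28, 28, 224)
```

Because every branch keeps the spatial size, the only thing that changes at the merge is the channel count.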

Predict Output · intermediate · Time limit: 2:00
What is the output shape of this Inception module snippet?

Given an input tensor of shape (batch_size, 28, 28, 192), what will be the output shape after this Inception module block?

import tensorflow as tf
from tensorflow.keras import layers

input_tensor = tf.keras.Input(shape=(28, 28, 192))

branch1 = layers.Conv2D(64, (1,1), padding='same', activation='relu')(input_tensor)
branch2 = layers.Conv2D(96, (1,1), padding='same', activation='relu')(input_tensor)
branch2 = layers.Conv2D(128, (3,3), padding='same', activation='relu')(branch2)
branch3 = layers.Conv2D(16, (1,1), padding='same', activation='relu')(input_tensor)
branch3 = layers.Conv2D(32, (5,5), padding='same', activation='relu')(branch3)
branch4 = layers.MaxPooling2D((3,3), strides=(1,1), padding='same')(input_tensor)
branch4 = layers.Conv2D(32, (1,1), padding='same', activation='relu')(branch4)

output = layers.concatenate([branch1, branch2, branch3, branch4], axis=-1)
model = tf.keras.Model(inputs=input_tensor, outputs=output)
print(model.output_shape)
A) (None, 28, 28, 256)
B) (None, 28, 28, 224)
C) (None, 28, 28, 192)
D) (None, 28, 28, 320)
💡 Hint

Add the number of filters from all branches for the last dimension.
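If you want to check your answer after working it out, the arithmetic behind the hint can be written down directly. Since every branch uses 'same' padding with stride 1, the spatial size stays 28x28, and only the final filter count of each branch contributes to the concatenated channel dimension (the 96-filter 1x1 in branch2 is an intermediate step, not an output):

```python
# Final filter counts of each branch in the snippet above.
branch_filters = {
    'branch1 (1x1)': 64,
    'branch2 (1x1 -> 3x3)': 128,
    'branch3 (1x1 -> 5x5)': 32,
    'branch4 (pool -> 1x1)': 32,
}
total_channels = sum(branch_filters.values())
print(total_channels)  # 256
```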

Model Choice · advanced · Time limit: 2:00
Which model architecture first introduced the Inception module?

Which famous convolutional neural network architecture first used the Inception module to improve performance and efficiency?

A) AlexNet
B) GoogLeNet (Inception v1)
C) VGGNet
D) ResNet
💡 Hint

It is also called Inception v1 and won the ImageNet challenge in 2014.

🧠 Conceptual · advanced · Time limit: 2:00
What technique in the Inception module applies 1x1 convolutions before the expensive larger convolutions?

In Inception modules, 1x1 convolutions are applied before the larger 3x3 and 5x5 convolutions to reduce the number of channels. What is this technique called?

A) Batch normalization
B) Pooling
C) Dropout
D) Dimensionality reduction
💡 Hint

It helps reduce computation by lowering the number of input channels.
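A quick back-of-envelope calculation shows why the hint holds. Using the 5x5 branch of the module from the earlier output-shape problem (28x28 spatial size, 192 input channels, 32 output filters), the multiply-accumulate count of a stride-1 convolution is roughly H x W x k x k x C_in x C_out; the exact 16-channel bottleneck width here matches that snippet:

```python
H, W = 28, 28

# Without a bottleneck: a 5x5 conv straight from 192 channels to 32.
direct = H * W * 5 * 5 * 192 * 32

# With a bottleneck: a 1x1 conv down to 16 channels, then the 5x5 conv.
bottleneck = H * W * 1 * 1 * 192 * 16 + H * W * 5 * 5 * 16 * 32

print(direct)                            # 120422400
print(bottleneck)                        # 12443648
print(round(direct / bottleneck, 1))     # ~9.7x fewer multiply-accumulates
```

This roughly order-of-magnitude saving is what makes the wide, multi-branch Inception design affordable.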

🔧 Debug · expert · Time limit: 3:00
Why does this Inception module code raise a shape mismatch error?

Consider this code snippet for an Inception module. It raises a shape mismatch error during concatenation. What is the cause?

import tensorflow as tf
from tensorflow.keras import layers

input_tensor = tf.keras.Input(shape=(28, 28, 192))

branch1 = layers.Conv2D(64, (3,3), padding='valid', activation='relu')(input_tensor)
branch2 = layers.Conv2D(96, (1,1), padding='same', activation='relu')(input_tensor)
branch2 = layers.Conv2D(128, (3,3), padding='same', activation='relu')(branch2)
branch3 = layers.Conv2D(16, (1,1), padding='same', activation='relu')(input_tensor)
branch3 = layers.Conv2D(32, (5,5), padding='same', activation='relu')(branch3)
branch4 = layers.MaxPooling2D((3,3), strides=(1,1), padding='same')(input_tensor)
branch4 = layers.Conv2D(32, (1,1), padding='same', activation='relu')(branch4)

output = layers.concatenate([branch1, branch2, branch3, branch4], axis=-1)
model = tf.keras.Model(inputs=input_tensor, outputs=output)
print(model.output_shape)
A) branch1 uses 'valid' padding, producing smaller spatial dimensions than the other branches.
B) branch4 uses max pooling with stride 1, causing a shape mismatch.
C) The input tensor shape is incompatible with 5x5 convolutions.
D) The concatenation axis is incorrect; it should be axis=1.
💡 Hint

Check how padding affects output size in convolutions.
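The hint can be made precise with the standard output-size formula for a stride-1 convolution (framework-independent, so `conv_out_size` below is just an illustrative helper): 'same' padding preserves the input size, while 'valid' padding shrinks it by kernel_size - 1.

```python
def conv_out_size(in_size, kernel_size, padding):
    # Stride-1 convolution output size.
    if padding == 'same':
        return in_size          # zero-padded to preserve the input size
    return in_size - kernel_size + 1  # 'valid': no padding, output shrinks

print(conv_out_size(28, 3, 'same'))   # 28
print(conv_out_size(28, 3, 'valid'))  # 26
```

A 26x26 feature map cannot be concatenated channel-wise with the 28x28 outputs of the other branches, which is exactly the mismatch the debugger sees.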