TensorFlow · ~20 mins

Why CNNs understand visual patterns in TensorFlow - Challenge Your Understanding

Challenge - 5 Problems
🎖️
CNN Visual Pattern Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
🧠 Conceptual · intermediate
Why do convolutional layers use small filters?

Convolutional Neural Networks (CNNs) use filters (kernels) to scan images. Why do these filters usually have small sizes like 3x3 or 5x5?

A. Small filters capture local patterns and reduce the number of parameters, making learning efficient.
B. Small filters increase the image size to help the network see more details.
C. Small filters are used to convert images into grayscale before processing.
D. Small filters randomly remove pixels to reduce noise in images.
💡 Hint

Think about how images have small details like edges and textures.
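The efficiency claim is easy to verify by counting weights. Here is a small illustrative sketch (the `conv_params` helper is hypothetical, not a TensorFlow API) that uses the standard Conv2D parameter formula, kernel_height × kernel_width × input_channels × filters + biases:

```python
def conv_params(kernel_size, in_channels, filters):
    """Weights + biases for a single square Conv2D layer."""
    return kernel_size * kernel_size * in_channels * filters + filters

# 32 filters over a 3-channel (RGB) input:
small = conv_params(3, 3, 32)    # 3*3*3*32 + 32 = 896
large = conv_params(11, 3, 32)   # 11*11*3*32 + 32 = 11648

print(small, large)  # 896 11648
```

A single 11x11 filter bank costs roughly 13x more parameters than a 3x3 one, while the 3x3 kernel is still large enough to respond to local edges and textures.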

Predict Output · intermediate
Output shape after convolution

Given a 28x28 grayscale image input to a convolutional layer with 16 filters of size 3x3, stride 1, and 'valid' padding, what is the output shape?

TensorFlow
import tensorflow as tf
input_shape = (1, 28, 28, 1)  # batch size 1, height 28, width 28, channels 1
inputs = tf.random.normal(input_shape)
conv_layer = tf.keras.layers.Conv2D(filters=16, kernel_size=3, strides=1, padding='valid')
outputs = conv_layer(inputs)
print(outputs.shape)
A. (1, 27, 27, 16)
B. (1, 28, 28, 16)
C. (1, 25, 25, 16)
D. (1, 26, 26, 16)
💡 Hint

Use the formula: output_size = input_size - filter_size + 1 for 'valid' padding.
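The hint's formula can be checked in a few lines of plain Python (a sketch; `valid_conv_output` is an illustrative helper, not a TensorFlow API):

```python
def valid_conv_output(input_size, filter_size, stride=1):
    """Spatial size after a 'valid' (no padding) convolution."""
    return (input_size - filter_size) // stride + 1

h = valid_conv_output(28, 3)  # 28 - 3 + 1 = 26
w = valid_conv_output(28, 3)
print((1, h, w, 16))  # (1, 26, 26, 16)
```

The channel dimension becomes 16 simply because the layer has 16 filters, each producing one output feature map.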

Model Choice · advanced
Choosing CNN architecture for texture recognition

You want to build a CNN to recognize textures in images. Which architecture choice helps the network better capture fine texture details?

A. Use dropout layers only without convolution to prevent overfitting.
B. Use a single large 11x11 filter in the first layer to capture all texture at once.
C. Use deeper layers with small 3x3 filters and max pooling to gradually capture complex patterns.
D. Use only fully connected layers without convolution to focus on global features.
💡 Hint

Think about how stacking small filters helps build complex features step-by-step.
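One way to see why stacking helps: each stride-1 convolution grows the receptive field by kernel_size - 1. A small illustrative sketch (the `receptive_field` helper is hypothetical) shows that stacked 3x3 layers reach the same receptive field as one large filter, with fewer parameters and extra non-linearities in between:

```python
def receptive_field(num_layers, kernel_size=3):
    """Receptive field of a stack of stride-1 convolutions (no pooling)."""
    rf = 1
    for _ in range(num_layers):
        rf += kernel_size - 1
    return rf

print(receptive_field(1))  # 3
print(receptive_field(2))  # 5 -- two 3x3 layers see as far as one 5x5
print(receptive_field(3))  # 7 -- three 3x3 layers match one 7x7
```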

Metrics · advanced
Interpreting CNN training loss and accuracy

During CNN training on image classification, you observe the training loss steadily decreases but validation accuracy plateaus early. What does this indicate?

A. The model has perfect generalization and no further improvement is possible.
B. The model is overfitting the training data and not generalizing well to new images.
C. The model is underfitting and needs more training epochs.
D. The training data is corrupted, causing loss to decrease but accuracy to plateau.
💡 Hint

Think about what it means when training improves but validation does not.
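The overfitting signature in the question can be expressed as a simple heuristic: training loss keeps falling while validation accuracy stays flat. A minimal sketch (the `looks_overfit` helper and its thresholds are illustrative assumptions, not part of any TensorFlow API):

```python
def looks_overfit(train_loss, val_acc, window=3, tol=1e-3):
    """Heuristic: training loss still falling while validation accuracy is flat."""
    still_learning = train_loss[-1] < train_loss[-window] - tol
    plateaued = abs(val_acc[-1] - val_acc[-window]) < tol
    return still_learning and plateaued

train_loss = [1.2, 0.8, 0.5, 0.3, 0.2]
val_acc    = [0.70, 0.82, 0.85, 0.85, 0.85]
print(looks_overfit(train_loss, val_acc))  # True
```

In practice this is the pattern that early stopping or regularization (dropout, weight decay, data augmentation) is meant to address.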

🔧 Debug · expert
Why does this CNN fail to learn meaningful features?

Consider this TensorFlow CNN code snippet. After training, the model accuracy stays near random guessing. What is the main issue?

TensorFlow
import tensorflow as tf
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, kernel_size=3, activation='relu', input_shape=(28,28,1)),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# Training code omitted
A. The model lacks normalization layers like BatchNormalization, causing unstable training.
B. The model uses 'relu' activation which is not suitable for CNNs.
C. The input shape is incorrect; it should be (28,28) without channel dimension.
D. The loss function 'sparse_categorical_crossentropy' is incompatible with softmax activation.
💡 Hint

Think about what helps CNNs train stably on image data.
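As the hint suggests, stable CNN training on image data usually starts with normalized values: raw 0-255 pixels are scaled into [0, 1] before the first layer, and `tf.keras.layers.BatchNormalization` can additionally normalize activations between layers. A minimal plain-Python sketch of the input-scaling step (the `normalize_pixels` helper is illustrative, not part of the quiz code):

```python
def normalize_pixels(pixels):
    """Scale raw 0-255 pixel values into [0, 1] before feeding the CNN."""
    return [p / 255.0 for p in pixels]

raw = [0, 51, 255]
print(normalize_pixels(raw))  # [0.0, 0.2, 1.0]
```

With large unnormalized inputs, gradients can be poorly scaled and the optimizer may fail to move the weights in a useful direction, which is consistent with accuracy staying near random guessing.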