TensorFlow · ~20 mins

Feature map visualization in TensorFlow - ML Experiment: Train & Evaluate

Experiment - Feature map visualization
Problem: You have trained a convolutional neural network (CNN) on the MNIST dataset to classify handwritten digits. You want to understand what the CNN learns by visualizing the feature maps (activation outputs) of the first convolutional layer for a sample input image.
Current Metrics: Training accuracy: 98%, Validation accuracy: 97%
Issue: Although the model performs well, you cannot see what features the CNN extracts from the input images. You want to visualize the feature maps to gain insight into the learned filters.
Your Task
Visualize the feature maps produced by the first convolutional layer of the trained CNN for a given input image from the test set.
Use TensorFlow and Keras only.
Do not retrain the model; use the existing trained model.
Visualize feature maps as grayscale images in a grid.
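Since the task says not to retrain, in practice you would reload a model saved after training rather than rebuilding it. A minimal sketch of the save/load round trip (the filename `mnist_cnn.keras` and the small stand-in architecture are illustrative, not part of the exercise):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Stand-in for your trained CNN; in the real workflow you would call
# model.save(...) once, right after model.fit(...).
model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dense(10, activation='softmax'),
])
model.save('mnist_cnn.keras')  # hypothetical filename

# In a later session, reload instead of retraining.
trained = tf.keras.models.load_model('mnist_cnn.keras')
```

Everything below (layer-output extraction, prediction, plotting) then works on `trained` exactly as it does on a freshly trained model.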
Solution
TensorFlow
import tensorflow as tf
from tensorflow.keras import layers, models
import matplotlib.pyplot as plt
import numpy as np

# Load MNIST dataset
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Normalize and reshape data
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0
x_train = np.expand_dims(x_train, -1)  # shape (num_samples, 28, 28, 1)
x_test = np.expand_dims(x_test, -1)

# Define a simple CNN model
model = models.Sequential([
    layers.Conv2D(32, (3,3), activation='relu', input_shape=(28,28,1)),
    layers.MaxPooling2D((2,2)),
    layers.Conv2D(64, (3,3), activation='relu'),
    layers.MaxPooling2D((2,2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')
])

# Compile and train the model (this stands in for the already-trained model
# from the exercise; no further retraining is needed afterwards)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=3, batch_size=64, validation_split=0.1, verbose=0)

# Select one test image
test_img = x_test[0]

# Create a model to output the feature maps of the first Conv2D layer
layer_outputs = [layer.output for layer in model.layers if isinstance(layer, layers.Conv2D)]
first_conv_layer_model = models.Model(inputs=model.input, outputs=layer_outputs[0])

# Get feature maps for the test image
feature_maps = first_conv_layer_model.predict(np.expand_dims(test_img, axis=0))  # shape (1, 26, 26, 32)

# Plot the feature maps
num_filters = feature_maps.shape[-1]  # 32 filters in the first Conv2D layer

cols = 8
rows = num_filters // cols  # 32 filters -> an 8 x 4 grid
plt.figure(figsize=(cols*1.5, rows*1.5))
for i in range(num_filters):
    plt.subplot(rows, cols, i+1)
    plt.imshow(feature_maps[0, :, :, i], cmap='gray')
    plt.axis('off')
plt.suptitle('Feature maps of first Conv2D layer')
plt.show()
Created a new model that outputs the activations of the first convolutional layer.
Selected one test image and preprocessed it to match model input shape.
Used matplotlib to plot each feature map as a grayscale image in a grid.
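Note that `imshow` rescales each map's intensity range on its own; if you instead want to save the raw arrays as images or compare maps on a common scale, a small min-max normalization helper (an illustrative sketch, not part of the solution above) does the job:

```python
import numpy as np

def normalize_map(fmap, eps=1e-8):
    """Min-max scale a single 2-D feature map into [0, 1]."""
    fmin, fmax = fmap.min(), fmap.max()
    return (fmap - fmin) / (fmax - fmin + eps)  # eps guards against flat maps

# Example on a toy 2x2 "feature map": values end up spanning [0, 1].
m = np.array([[0.0, 2.0], [4.0, 8.0]])
print(normalize_map(m))
```

You could apply `normalize_map(feature_maps[0, :, :, i])` before `plt.imshow` in the loop above; the plot looks the same, but the arrays themselves become directly comparable.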
Results Interpretation

Before: Model accuracy was high but no insight into learned features.

After: Visualized 32 feature maps showing different patterns the first convolutional layer detects from the input image.

Visualizing feature maps helps understand what patterns a CNN learns at different layers, improving interpretability without changing model performance.
Bonus Experiment
Visualize feature maps from the second convolutional layer and compare them with the first layer's feature maps.
💡 Hint
Create a similar model outputting the second Conv2D layer activations and plot them. Notice how deeper layers capture more complex features.
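A minimal sketch of the bonus, reusing the same Conv2D-filtering trick from the solution. (The model here is a fresh, untrained stand-in and the input is random, so the maps reflect random filters; with your trained model and a real test image, the shapes and code are identical.)

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Same architecture as the solution, up to the second Conv2D layer.
model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
])

# Collect every Conv2D output; index 1 is the second conv layer.
conv_outputs = [l.output for l in model.layers if isinstance(l, layers.Conv2D)]
second_conv_model = models.Model(inputs=model.input, outputs=conv_outputs[1])

img = np.random.rand(1, 28, 28, 1).astype('float32')  # stand-in for x_test[0]
maps = second_conv_model.predict(img, verbose=0)
print(maps.shape)  # (1, 11, 11, 64): smaller spatial size, more channels
```

Plot these 64 maps in an 8x8 grid exactly as in the solution (`cols = 8`, `rows = 8`). Compared with the first layer's 26x26 maps, the second layer's 11x11 maps have coarser spatial detail but respond to more abstract combinations of the earlier edge-like patterns.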