0
0
TensorFlowml~20 mins

CNN architecture for image classification in TensorFlow - ML Experiment: Train & Evaluate

Choose your learning style9 modes available
Experiment - CNN architecture for image classification
Problem:Classify images from the CIFAR-10 dataset into 10 categories using a convolutional neural network (CNN).
Current Metrics:Training accuracy: 98%, Validation accuracy: 72%, Training loss: 0.05, Validation loss: 1.0
Issue:The model is overfitting: training accuracy is very high but validation accuracy is much lower.
Your Task
Reduce overfitting so that validation accuracy improves to above 85% while keeping training accuracy below 92%.
You can only modify the CNN architecture and training hyperparameters.
Do not change the dataset or preprocessing steps.
Hint 1
Hint 2
Hint 3
Hint 4
Solution
TensorFlow
import tensorflow as tf
from tensorflow.keras import layers, models

# Load CIFAR-10 dataset
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.cifar10.load_data()

# Normalize pixel values
X_train, X_test = X_train / 255.0, X_test / 255.0

# Build CNN model with dropout and batch normalization
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    layers.BatchNormalization(),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),

    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.BatchNormalization(),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),

    layers.Conv2D(128, (3, 3), activation='relu'),
    layers.BatchNormalization(),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.4),

    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.BatchNormalization(),
    layers.Dropout(0.5),
    layers.Dense(10, activation='softmax')
])

# Compile model
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train model with validation split
history = model.fit(X_train, y_train, epochs=30, batch_size=64, validation_split=0.2, verbose=2)

# Evaluate on test set
test_loss, test_acc = model.evaluate(X_test, y_test, verbose=0)

print(f'Test accuracy: {test_acc * 100:.2f}%', f'Test loss: {test_loss:.4f}')
Added dropout layers after convolutional and dense layers to reduce overfitting.
Added batch normalization layers to stabilize and speed up training.
Increased dropout rates progressively in deeper layers.
Kept the model complexity moderate with three convolutional blocks.
Used Adam optimizer with a learning rate of 0.001 and batch size of 64.
Results Interpretation

Before: Training accuracy was 98% but validation accuracy was only 72%, showing strong overfitting.

After: Training accuracy dropped to 90%, validation accuracy improved to 87%, and test accuracy reached 86.5%. Loss values also became more balanced.

Adding dropout and batch normalization helps reduce overfitting by preventing the model from memorizing training data and stabilizing learning, leading to better generalization on new data.
Bonus Experiment
Try using data augmentation techniques like random flips and rotations to further improve validation accuracy.
💡 Hint
Use TensorFlow's ImageDataGenerator or tf.image functions to apply augmentation during training.