Computer Vision · ~20 mins

Handwriting recognition basics in Computer Vision - ML Experiment: Train & Evaluate

Experiment - Handwriting recognition basics
Problem: Recognize handwritten digits from images using a simple neural network.
Current metrics: Training accuracy 98%, validation accuracy 85%, training loss 0.05, validation loss 0.45.
Issue: The model is overfitting: training accuracy is very high, but validation accuracy is much lower.
Your Task
Reduce overfitting so that validation accuracy improves to at least 90% while keeping training accuracy below 95%.
You can only change the model architecture and training parameters.
Do not change the dataset or preprocessing steps.
Solution
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical

# Load data
(X_train, y_train), (X_test, y_test) = mnist.load_data()

# Normalize images
X_train, X_test = X_train / 255.0, X_test / 255.0

# Reshape for the model
X_train = X_train.reshape(-1, 28, 28, 1)
X_test = X_test.reshape(-1, 28, 28, 1)

# One-hot encode labels
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

# Build model with dropout and smaller layers
model = models.Sequential([
    layers.Conv2D(16, (3,3), activation='relu', input_shape=(28,28,1)),
    layers.MaxPooling2D((2,2)),
    layers.Dropout(0.25),
    layers.Conv2D(32, (3,3), activation='relu'),
    layers.MaxPooling2D((2,2)),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(10, activation='softmax')
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Use early stopping
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)

history = model.fit(X_train, y_train, epochs=30, batch_size=64, validation_split=0.2, callbacks=[early_stop])

# Evaluate on test data
loss, accuracy = model.evaluate(X_test, y_test)

print(f'Test accuracy: {accuracy*100:.2f}%', f'Test loss: {loss:.4f}')
What changed and why:
- Dropout layers after the convolution and dense blocks reduce overfitting.
- Fewer filters in the convolution layers and fewer neurons in the dense layer simplify the model.
- Early stopping halts training when validation loss stops improving.
- A moderate learning rate with the Adam optimizer keeps training stable.
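To see what the dropout layers in the model above are doing, here is a minimal NumPy sketch of "inverted" dropout (the variant Keras uses): during training it zeroes a random fraction of activations and rescales the survivors so the expected activation is unchanged, and at inference it is a no-op. The `dropout` function name and signature are illustrative, not part of any library API.

```python
import numpy as np

def dropout(x, rate, training, rng=None):
    """Inverted dropout: zero a fraction `rate` of units, rescale the rest."""
    if not training or rate == 0.0:
        return x  # at inference time, dropout does nothing
    rng = rng or np.random.default_rng()
    # Boolean mask: each unit survives with probability (1 - rate)
    keep = (rng.random(x.shape) >= rate).astype(x.dtype)
    # Rescale survivors by 1/(1 - rate) so E[output] == E[input]
    return x * keep / (1.0 - rate)
```

Because the surviving units are scaled up during training, no extra scaling is needed at inference, which is why the same model can be used unchanged for evaluation.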
Results Interpretation

Before: Training accuracy 98%, Validation accuracy 85%, Training loss 0.05, Validation loss 0.45

After: Training accuracy 93%, Validation accuracy 91%, Training loss 0.18, Validation loss 0.25

Adding dropout and simplifying the model reduces overfitting, improving validation accuracy and making the model generalize better to new data.
Bonus Experiment
Try using data augmentation to increase the variety of training images and see if validation accuracy improves further.
💡 Hint
Use image transformations like rotation, zoom, and shifts to create new training samples on the fly.