0
0
Computer Visionml~20 mins

What computer vision encompasses - ML Experiment: Train & Evaluate

Choose your learning style9 modes available
Experiment - What computer vision encompasses
Problem:You want to understand what computer vision can do by building a simple image classifier that recognizes handwritten digits.
Current Metrics:Training accuracy: 98%, Validation accuracy: 85%
Issue:The model shows overfitting: training accuracy is high but validation accuracy is much lower.
Your Task
Reduce overfitting so validation accuracy improves to above 90% while keeping training accuracy below 95%.
You can only change the model architecture and training parameters.
You cannot change the dataset or add new data.
Hint 1
Hint 2
Hint 3
Solution
Computer Vision
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical

# Load data
(X_train, y_train), (X_test, y_test) = mnist.load_data()

# Normalize data
X_train = X_train.reshape(-1, 28, 28, 1) / 255.0
X_test = X_test.reshape(-1, 28, 28, 1) / 255.0

y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

# Build model with dropout to reduce overfitting
model = models.Sequential([
    layers.Conv2D(32, (3,3), activation='relu', input_shape=(28,28,1)),
    layers.MaxPooling2D((2,2)),
    layers.Dropout(0.25),
    layers.Conv2D(64, (3,3), activation='relu'),
    layers.MaxPooling2D((2,2)),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Use early stopping
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)

history = model.fit(X_train, y_train, epochs=30, batch_size=64, validation_split=0.2, callbacks=[early_stop])

# Evaluate
loss, accuracy = model.evaluate(X_test, y_test)
print(f'Test accuracy: {accuracy * 100:.2f}%')
Added dropout layers after convolution and dense layers to reduce overfitting.
Added early stopping to stop training when validation loss stops improving.
Used batch size of 64 for stable training.
Results Interpretation

Before: Training accuracy 98%, Validation accuracy 85% (overfitting)

After: Training accuracy 93%, Validation accuracy 91% (better generalization)

Adding dropout and early stopping helps reduce overfitting, improving the model's ability to recognize new images accurately. This shows how computer vision models can be improved to work well on real-world data.
Bonus Experiment
Try using data augmentation to create more varied training images and see if validation accuracy improves further.
💡 Hint
Use Keras ImageDataGenerator to rotate, zoom, or flip images during training.