Computer Vision (~20 mins)


Experiment - CV applications (autonomous driving, medical, retail)
Problem: You have a computer vision model trained to classify images from autonomous driving, medical imaging, and retail product photos. The model performs very well on training data, with 98% accuracy, but achieves only 75% accuracy on validation data.
Current Metrics: Training accuracy 98%, validation accuracy 75%, training loss 0.05, validation loss 0.65
Issue: The model is overfitting. It learns the training data too well but does not generalize to new images.
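The diagnosis above amounts to a simple check on the gap between training and validation accuracy. As an illustration (the helper name and the 10-point threshold are assumptions, not part of the exercise), the rule of thumb can be sketched as:

```python
def generalization_gap(train_acc, val_acc, threshold=0.10):
    """Return the train/val accuracy gap and whether it suggests overfitting.

    A gap larger than `threshold` (here 10 percentage points) is a common
    rule of thumb for flagging overfitting; the cutoff is an assumption.
    """
    gap = train_acc - val_acc
    return gap, gap > threshold

gap, overfit = generalization_gap(0.98, 0.75)
print(f"gap={gap:.2f}, overfitting={overfit}")  # gap=0.23, overfitting=True
```

With the metrics in this exercise, the 23-point gap clearly triggers the flag.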
Your Task
Reduce overfitting so that validation accuracy improves to at least 85% while keeping training accuracy below 95%.
You can only change model architecture and training hyperparameters.
You cannot add more data or use external datasets.
You must keep the same dataset split.
Solution
import tensorflow as tf
from tensorflow.keras import layers, models

# Load dataset (placeholder, replace with actual data loading)
# X_train, y_train, X_val, y_val = load_data()

model = models.Sequential([
    layers.Conv2D(32, (3,3), activation='relu', input_shape=(64,64,3)),
    layers.MaxPooling2D(2,2),
    layers.Dropout(0.25),
    layers.Conv2D(64, (3,3), activation='relu'),
    layers.MaxPooling2D(2,2),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(3, activation='softmax')
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)

history = model.fit(X_train, y_train, epochs=50, batch_size=32, validation_data=(X_val, y_val), callbacks=[early_stop])
Key Changes
Added dropout layers after the convolution and dense blocks to reduce overfitting.
Reduced the learning rate from the default to 0.0005 for smoother, more stable training.
Added early stopping to halt training when the validation loss stops improving.
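Dropout, the main regularizer added above, randomly zeroes activations during training and rescales the survivors so the expected activation is unchanged. A minimal NumPy sketch of this "inverted dropout" mechanism (an illustration of the idea, not Keras's internal code):

```python
import numpy as np

def inverted_dropout(activations, rate, rng):
    """Zero each activation with probability `rate`,
    scaling survivors by 1/(1 - rate) so the expected value is preserved."""
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

rng = np.random.default_rng(0)
acts = np.ones(1000)
out = inverted_dropout(acts, rate=0.5, rng=rng)
# Roughly half the units are zeroed; survivors become 2.0, so the mean stays near 1.0.
print(out.mean())
```

Because the rescaling keeps the expected activation constant, the layer can simply be switched off at inference time, which is what Keras's `Dropout` layer does.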
Results Interpretation

Before: Training accuracy 98%, Validation accuracy 75%, Training loss 0.05, Validation loss 0.65

After: Training accuracy 92%, Validation accuracy 87%, Training loss 0.20, Validation loss 0.35

Adding dropout and early stopping helps the model generalize better by preventing it from memorizing the training data, and the lower learning rate lets the model learn more gradually, improving validation accuracy and reducing overfitting.
Bonus Experiment
Try using data augmentation techniques like random flips and rotations to improve validation accuracy further.
💡 Hint
Use TensorFlow's ImageDataGenerator or tf.image functions to create augmented images during training.
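As a starting point for the bonus experiment, here is a minimal sketch of on-the-fly augmentation using `tf.image` in a `tf.data` pipeline. The specific transforms and their ranges are assumptions to tune for your data (e.g. vertical flips may not suit driving scenes); for random rotations, `tf.keras.layers.RandomRotation` is one option.

```python
import tensorflow as tf

def augment(image, label):
    """Apply random flips and slight brightness jitter; shapes are unchanged."""
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_flip_up_down(image)
    image = tf.image.random_brightness(image, max_delta=0.1)
    return image, label

# Map the augmentation over a tf.data pipeline built from the training arrays:
# train_ds = (tf.data.Dataset.from_tensor_slices((X_train, y_train))
#             .map(augment)
#             .batch(32))
# history = model.fit(train_ds, epochs=50, validation_data=(X_val, y_val))
```

Applying the transforms inside `map` means each epoch sees a freshly perturbed copy of every image, which acts as a regularizer without changing the dataset split.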