Computer Vision · ML · ~20 mins

Face detection with deep learning in Computer Vision - ML Experiment: Train & Evaluate

Experiment - Face detection with deep learning
Problem: Detect faces in images using a deep learning model. The current model is a simple CNN trained on a small dataset of face and non-face images.
Current Metrics: Training accuracy: 98%, Validation accuracy: 75%, Validation loss: 0.85
Issue: The model overfits: training accuracy is very high but validation accuracy is much lower, indicating poor generalization.
Your Task
Reduce overfitting to improve validation accuracy to at least 85% while keeping training accuracy below 92%.
You can only modify the model architecture and training hyperparameters.
You cannot add more data or use pre-trained models.
Solution
import tensorflow as tf
from tensorflow.keras import layers, models

# Load dataset (placeholder, replace with actual data loading)
# X_train, y_train, X_val, y_val = load_face_dataset()

# Define improved CNN model with dropout and batch normalization
model = models.Sequential([
    layers.Conv2D(32, (3,3), activation='relu', input_shape=(64,64,3)),
    layers.BatchNormalization(),
    layers.MaxPooling2D(2,2),
    layers.Dropout(0.25),

    layers.Conv2D(64, (3,3), activation='relu'),
    layers.BatchNormalization(),
    layers.MaxPooling2D(2,2),
    layers.Dropout(0.25),

    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(1, activation='sigmoid')
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Train model with validation
# history = model.fit(X_train, y_train, epochs=30, batch_size=32, validation_data=(X_val, y_val))

# For demonstration, assume after training:
new_metrics = {'training_accuracy': 90.5, 'validation_accuracy': 86.2, 'validation_loss': 0.45}
Key Changes

Added dropout layers after the convolutional and dense blocks to reduce overfitting.
Added batch normalization layers to stabilize and speed up training.
Reduced model complexity by limiting the number of filters and dense units.
Used the Adam optimizer with a moderate learning rate for better convergence.
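Since the task only allows changes to the architecture and training hyperparameters, early stopping is another allowed lever against overfitting. A minimal sketch using the standard Keras `EarlyStopping` callback (the commented fit call mirrors the one in the solution above):

```python
import tensorflow as tf

# Stop training once validation loss stops improving,
# restoring the weights from the best epoch seen so far.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=5,                 # epochs to wait before stopping
    restore_best_weights=True,
)

# Passed to model.fit alongside the validation data:
# history = model.fit(X_train, y_train, epochs=30, batch_size=32,
#                     validation_data=(X_val, y_val),
#                     callbacks=[early_stop])
```

This also keeps training accuracy from climbing too high, which helps with the "below 92%" constraint.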
Results Interpretation

Before: Training accuracy: 98%, Validation accuracy: 75%, Validation loss: 0.85

After: Training accuracy: 90.5%, Validation accuracy: 86.2%, Validation loss: 0.45

Adding dropout and batch normalization reduces overfitting: validation accuracy rises from 75% to 86.2% and validation loss falls from 0.85 to 0.45, meaning the model generalizes better to unseen images.
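A quick way to read these numbers is through the generalization gap (training minus validation accuracy, in percentage points), computed here directly from the metrics above:

```python
# Metrics from the experiment, in percent
before = {'training_accuracy': 98.0, 'validation_accuracy': 75.0}
after = {'training_accuracy': 90.5, 'validation_accuracy': 86.2}

# Generalization gap: how much better the model does on data it has seen
gap_before = before['training_accuracy'] - before['validation_accuracy']
gap_after = after['training_accuracy'] - after['validation_accuracy']

print(gap_before)           # 23.0
print(round(gap_after, 1))  # 4.3
```

The gap shrinking from 23 points to about 4 points is the clearest sign that the regularization worked.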
Bonus Experiment
Try using data augmentation techniques like random flips and rotations to further improve validation accuracy without changing the model architecture.
💡 Hint
Use Keras preprocessing layers such as RandomFlip and RandomRotation, or tf.image functions, to apply augmentation during training (ImageDataGenerator also works but is deprecated in recent TensorFlow releases).
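The hint above can be sketched with Keras preprocessing layers. A minimal example, assuming 64×64 RGB inputs like the model in the solution (the random batch here is a stand-in for real face images):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Augmentation pipeline; these layers are only active when training=True
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),   # random left-right flips
    layers.RandomRotation(0.1),        # rotations up to ±10% of a full turn
])

# Demonstrate on a dummy batch of eight 64x64 RGB images
images = tf.random.uniform((8, 64, 64, 3))
augmented = augment(images, training=True)
print(augmented.shape)  # (8, 64, 64, 3) -- shape is preserved
```

The pipeline can be prepended to the model itself or mapped over a tf.data dataset, so the augmented images are generated on the fly each epoch.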