Computer Vision · ~20 mins

Face recognition concept in Computer Vision - ML Experiment: Train & Evaluate

Experiment - Face recognition concept
Problem: We want to build a model that recognizes faces in images. Currently, the model trains well on the training data but performs poorly on new images.
Current Metrics: Training accuracy: 98%, Validation accuracy: 65%, Training loss: 0.05, Validation loss: 1.2
Issue: The model is overfitting: it learns the training faces too well but cannot generalize to new faces.
Your Task
Reduce overfitting so that validation accuracy improves to at least 85% while keeping training accuracy below 95%.
You can only change model architecture and training hyperparameters.
Do not add more training data.
Keep the input image size and dataset fixed.
Solution
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Data augmentation to help generalization
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True,
    validation_split=0.2
)

val_datagen = ImageDataGenerator(
    rescale=1./255,
    validation_split=0.2
)

train_generator = train_datagen.flow_from_directory(
    'face_dataset/train',
    target_size=(128, 128),
    batch_size=32,
    class_mode='categorical',
    subset='training'
)

validation_generator = val_datagen.flow_from_directory(
    'face_dataset/train',
    target_size=(128, 128),
    batch_size=32,
    class_mode='categorical',
    subset='validation'
)

# Build a simpler CNN with dropout
model = models.Sequential([
    layers.Conv2D(32, (3,3), activation='relu', input_shape=(128,128,3)),
    layers.MaxPooling2D(2,2),
    layers.Dropout(0.25),
    layers.Conv2D(64, (3,3), activation='relu'),
    layers.MaxPooling2D(2,2),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(train_generator.num_classes, activation='softmax')
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),
    loss='categorical_crossentropy',
    metrics=['accuracy']
)

history = model.fit(
    train_generator,
    epochs=30,
    validation_data=validation_generator
)
Key Changes
- Added dropout layers after the convolutional and dense blocks to reduce overfitting.
- Applied data augmentation to the training images to increase data variety.
- Reduced the learning rate to 0.0005 for smoother training.
- Simplified the architecture by limiting the number of filters and layers.
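A common complement to the changes above (not part of the solution itself) is early stopping, which halts training once validation loss stops improving. A minimal sketch, assuming the training setup shown earlier:

```python
import tensorflow as tf

# Hypothetical sketch: stop training when validation loss plateaus,
# and roll back to the best weights seen so far.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',
    patience=5,                 # tolerate 5 epochs without improvement
    restore_best_weights=True   # keep the best checkpoint, not the last
)
```

It would be passed as `model.fit(..., callbacks=[early_stop])` alongside the dropout and augmentation already in place.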
Results Interpretation

Before: Training accuracy was 98% but validation accuracy was only 65%, showing overfitting.

After: Training accuracy dropped to 92% but validation accuracy improved to 87%, indicating better generalization.

Dropout and data augmentation keep the model from memorizing the training data, so it generalizes better to new images and overfits less.
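The improvement above can be summarized as a shrinking gap between training and validation accuracy. A quick sketch using the metric values quoted in this section (illustrative numbers, not live training output):

```python
def accuracy_gap(train_acc: float, val_acc: float) -> float:
    """Train-minus-validation accuracy; a large gap signals overfitting."""
    return round(train_acc - val_acc, 2)

# Before: 98% train vs 65% validation -> gap of 0.33
before = accuracy_gap(0.98, 0.65)
# After:  92% train vs 87% validation -> gap of 0.05
after = accuracy_gap(0.92, 0.87)
print(before, after)  # 0.33 0.05
```

A gap this small after regularization is exactly the behavior the task asked for.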
Bonus Experiment
Try using a pretrained model like MobileNetV2 as a feature extractor and fine-tune it for face recognition.
💡 Hint
Use transfer learning by freezing early layers of MobileNetV2 and training only the last layers on your dataset.
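One possible shape for this bonus experiment, sketched under assumptions: `weights=None` is used here only to avoid the ImageNet download, and `num_classes = 10` is a placeholder for `train_generator.num_classes` from the solution above. In practice you would load `weights='imagenet'` to get the pretrained features.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical sketch of the MobileNetV2 transfer-learning setup.
base = tf.keras.applications.MobileNetV2(
    input_shape=(128, 128, 3),
    include_top=False,   # drop the ImageNet classification head
    weights=None         # use weights='imagenet' in practice
)
base.trainable = False   # freeze the pretrained feature extractor

num_classes = 10  # placeholder; use train_generator.num_classes in practice
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.5),
    layers.Dense(num_classes, activation='softmax')
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss='categorical_crossentropy',
    metrics=['accuracy']
)
```

Only the pooling, dropout, and dense head are trained; once validation accuracy plateaus, you could unfreeze the last few base layers and fine-tune with a lower learning rate.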