0
0
Computer Visionml~20 mins

Why face analysis is a core CV application in Computer Vision - Experiment to Prove It

Choose your learning style9 modes available
Experiment - Why face analysis is a core CV application
Problem:We want to build a simple face recognition model that can identify people from images. Currently, the model recognizes faces with 95% accuracy on training data but only 70% on new images.
Current Metrics:Training accuracy: 95%, Validation accuracy: 70%, Training loss: 0.15, Validation loss: 0.60
Issue:The model is overfitting. It performs very well on training images but poorly on new, unseen images.
Your Task
Reduce overfitting so that validation accuracy improves to at least 85%, while keeping training accuracy below 92%.
You can only change model architecture and training hyperparameters.
You cannot add more training data.
You must keep the input image size and dataset fixed.
Hint 1
Hint 2
Hint 3
Solution
Computer Vision
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Data augmentation to help generalize
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True,
    validation_split=0.2
)

train_generator = train_datagen.flow_from_directory(
    'face_dataset/train',
    target_size=(64, 64),
    batch_size=32,
    class_mode='categorical',
    subset='training'
)

# Separate validation datagen without augmentation
val_datagen = ImageDataGenerator(
    rescale=1./255,
    validation_split=0.2
)

validation_generator = val_datagen.flow_from_directory(
    'face_dataset/train',
    target_size=(64, 64),
    batch_size=32,
    class_mode='categorical',
    subset='validation'
)

# Build model with dropout to reduce overfitting
model = models.Sequential([
    layers.Conv2D(32, (3,3), activation='relu', input_shape=(64, 64, 3)),
    layers.MaxPooling2D(2,2),
    layers.Conv2D(64, (3,3), activation='relu'),
    layers.MaxPooling2D(2,2),
    layers.Dropout(0.3),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(train_generator.num_classes, activation='softmax')
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),
    loss='categorical_crossentropy',
    metrics=['accuracy']
)

history = model.fit(
    train_generator,
    epochs=30,
    validation_data=validation_generator
)
Added dropout layers after convolution and dense layers to reduce overfitting.
Applied data augmentation to training images to improve generalization.
Reduced learning rate from default 0.001 to 0.0005 for smoother training.
Results Interpretation

Before: Training accuracy 95%, Validation accuracy 70%, high gap shows overfitting.

After: Training accuracy 90%, Validation accuracy 87%, gap reduced, model generalizes better.

Adding dropout and data augmentation helps the model learn features that work well on new images, reducing overfitting and improving validation accuracy. This shows why face analysis models need careful tuning to work well in real life.
Bonus Experiment
Try using transfer learning with a pre-trained face recognition model to improve accuracy further.
💡 Hint
Use a model like MobileNetV2 as a base and fine-tune it on your face dataset.