Computer Vision · ~20 mins

EfficientNet scaling in Computer Vision - ML Experiment: Train & Evaluate

Experiment - EfficientNet scaling
Problem: You want to classify images using EfficientNet, but your current model is too large and overfits the training data.
Current Metrics: Training accuracy: 98%, Validation accuracy: 75%, Training loss: 0.05, Validation loss: 0.85
Issue: The model overfits: training accuracy is very high but validation accuracy is much lower, indicating poor generalization.
Your Task
Reduce overfitting by applying EfficientNet scaling principles to balance model size and accuracy, aiming for validation accuracy >85% with training accuracy <92%.
You can only adjust the EfficientNet model scaling parameters (width, depth, resolution).
Do not change the dataset or training procedure (optimizer, epochs, batch size).
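The three scaling parameters named in the task come from EfficientNet's compound scaling rule: depth, width, and resolution are grown together under a single coefficient φ, using base coefficients α, β, γ found by grid search in the EfficientNet paper. A minimal pure-Python sketch of that rule (the coefficient values are the paper's; `compound_scale` is an illustrative helper, not a library function):

```python
# Compound scaling from the EfficientNet paper (Tan & Le, 2019):
# depth ~ alpha^phi, width ~ beta^phi, resolution ~ gamma^phi,
# chosen so that FLOPs roughly double per unit increase in phi.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # the paper's grid-searched base coefficients

def compound_scale(phi: float) -> dict:
    """Return depth/width/resolution multipliers for scaling coefficient phi."""
    return {
        "depth": ALPHA ** phi,
        "width": BETA ** phi,
        "resolution": GAMMA ** phi,
    }

# FLOPs scale by roughly (alpha * beta^2 * gamma^2) ** phi
flops_growth = ALPHA * BETA**2 * GAMMA**2
print(f"FLOPs growth per unit phi: {flops_growth:.2f}")  # ≈ 1.92, close to 2

for phi in (0, 1, 2):  # phi=0 corresponds to B0, larger phi to larger variants
    m = compound_scale(phi)
    print(phi, {k: round(v, 3) for k, v in m.items()})
```

Picking a smaller φ (i.e., a smaller variant such as B0) shrinks all three dimensions at once, which is exactly the lever this task asks you to pull against overfitting.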
Solution
import tensorflow as tf
from tensorflow.keras.applications import EfficientNetB0
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

# Load EfficientNetB0 base model with input shape 224x224x3
base_model = EfficientNetB0(include_top=False, input_shape=(224, 224, 3), weights='imagenet')

# Freeze base model layers to reduce overfitting
base_model.trainable = False

# Add classification head
x = base_model.output
x = GlobalAveragePooling2D()(x)
outputs = Dense(10, activation='softmax')(x)  # Assuming 10 classes
model = Model(inputs=base_model.input, outputs=outputs)

# Compile model
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# Assume X_train, y_train, X_val, y_val are preloaded datasets
# Train with frozen base model
history = model.fit(X_train, y_train, epochs=10, batch_size=32, validation_data=(X_val, y_val))

# Unfreeze some layers for fine-tuning with lower learning rate
base_model.trainable = True
for layer in base_model.layers[:-20]:
    layer.trainable = False

model.compile(optimizer=tf.keras.optimizers.Adam(1e-5), loss='sparse_categorical_crossentropy', metrics=['accuracy'])

history_finetune = model.fit(X_train, y_train, epochs=10, batch_size=32, validation_data=(X_val, y_val))
- Used 224x224 input resolution (EfficientNetB0's native size) to balance computation and accuracy.
- Used EfficientNetB0, the smallest EfficientNet variant, instead of larger ones to reduce model capacity.
- Froze the base model initially and trained only the classification head, preventing the pretrained weights from overfitting.
- Fine-tuned only the last 20 layers with a low learning rate (1e-5) to improve validation accuracy without reintroducing overfitting.
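The quantity to watch throughout is the gap between training and validation accuracy, which the task bounds (validation > 85% with training < 92%). A small helper for reading that gap off a Keras-style `history.history` dict (`overfit_gap` is an illustrative name, not a Keras API):

```python
# The train/val accuracy gap is the overfitting signal this experiment targets.
def overfit_gap(history_dict):
    """Return final training accuracy minus final validation accuracy."""
    train_acc = history_dict["accuracy"][-1]
    val_acc = history_dict["val_accuracy"][-1]
    return train_acc - val_acc

# Using the metrics quoted in this experiment:
before = {"accuracy": [0.98], "val_accuracy": [0.75]}
after = {"accuracy": [0.90], "val_accuracy": [0.87]}
print(round(overfit_gap(before), 2))  # 0.23 — severe overfitting
print(round(overfit_gap(after), 2))   # 0.03 — healthy generalization
```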
Results Interpretation

Before: Training accuracy 98%, Validation accuracy 75%, high overfitting.

After: Training accuracy 90%, Validation accuracy 87%, better balance and generalization.

Scaling EfficientNet down and freezing the pretrained layers reduces overfitting and improves validation accuracy by matching model capacity to the dataset.
Bonus Experiment
Try switching to EfficientNetB1 with its native 240x240 input resolution to see if validation accuracy improves further without overfitting.
💡 Hint
Increase resolution and model size carefully; monitor validation loss to avoid overfitting.
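One way to run the bonus experiment while honoring the hint: build EfficientNetB1 at 240x240 with the same frozen-base head as the main solution, and add an early-stopping callback on validation loss as the overfitting guard. A sketch under the same assumptions as above (10 classes, preloaded `X_train`/`y_train`/`X_val`/`y_val`; `weights=None` here to keep the sketch self-contained, whereas the solution uses `weights='imagenet'`):

```python
import tensorflow as tf
from tensorflow.keras.applications import EfficientNetB1
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

# EfficientNetB1 at its native 240x240 resolution (one compound-scaling step up from B0)
base_model = EfficientNetB1(include_top=False, input_shape=(240, 240, 3), weights=None)
base_model.trainable = False  # start frozen, as in the main experiment

# Same classification head as the solution
x = GlobalAveragePooling2D()(base_model.output)
outputs = Dense(10, activation='softmax')(x)  # same assumed 10 classes
model = Model(inputs=base_model.input, outputs=outputs)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Monitor validation loss, as the hint says; stop when it stops improving
# and roll back to the best epoch's weights.
early_stop = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)

# history = model.fit(X_train, y_train, epochs=20, batch_size=32,
#                     validation_data=(X_val, y_val), callbacks=[early_stop])
```

Note that images would need to be resized to 240x240 before training, since B1 expects the larger input.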