Computer Vision · ~20 mins

Why Pre-Trained Models Save Time in Computer Vision: An Experiment to Prove It

Experiment: Why pre-trained models save time
Problem: You want to classify images into categories, but training a model from scratch takes a long time and needs a lot of data.
Current Metrics: Training from scratch: training accuracy 95%, validation accuracy 70%, training time 2 hours.
Issue: The model overfits, and training takes too long because it starts learning from zero.
Your Task
Use a pre-trained model to reduce training time and improve validation accuracy to at least 80%.
You must use transfer learning with a pre-trained model.
You can only fine-tune the last layers.
Keep training time under 30 minutes.
Solution
import tensorflow as tf
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Load pre-trained MobileNetV2 without top layers
base_model = MobileNetV2(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

# Freeze base model layers
base_model.trainable = False

# Add new classification layers
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(128, activation='relu')(x)
predictions = Dense(5, activation='softmax')(x)  # Assuming 5 classes

model = Model(inputs=base_model.input, outputs=predictions)

# Compile model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Prepare data generators
# MobileNetV2 expects inputs scaled to [-1, 1]; use its own preprocess_input
train_datagen = ImageDataGenerator(
    preprocessing_function=tf.keras.applications.mobilenet_v2.preprocess_input,
    validation_split=0.2)
train_generator = train_datagen.flow_from_directory(
    'data/train', target_size=(224, 224), batch_size=32, class_mode='categorical', subset='training')
validation_generator = train_datagen.flow_from_directory(
    'data/train', target_size=(224, 224), batch_size=32, class_mode='categorical', subset='validation')

# Train only top layers
history = model.fit(train_generator, epochs=5, validation_data=validation_generator)

# Optionally unfreeze some layers and fine-tune
base_model.trainable = True
for layer in base_model.layers[:-20]:
    layer.trainable = False

model.compile(optimizer=tf.keras.optimizers.Adam(1e-5), loss='categorical_crossentropy', metrics=['accuracy'])
history_fine = model.fit(train_generator, epochs=5, validation_data=validation_generator)
Used MobileNetV2 pre-trained on ImageNet as the base model.
Froze the entire base model at first to preserve its learned features.
Added new classification layers for the specific task.
Trained only the new layers first, then unfroze the last 20 base layers and fine-tuned them at a low learning rate.
Reduced training time from 2 hours to under 30 minutes.
Improved validation accuracy from 70% to over 80%.
Results Interpretation

Before: Training accuracy 95%, validation accuracy 70%, training time 2 hours.

After: Training accuracy 90%, validation accuracy 83%, training time 25 minutes.

Using a pre-trained model saves time because it has already learned general visual features (edges, textures, object parts) from millions of ImageNet images. You only need to teach it your specific task, which trains faster and helps the model generalize better.
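You can make this concrete by counting how many parameters actually get updated while the base model is frozen. The sketch below rebuilds the same architecture as the solution (MobileNetV2 base, 128-unit head, 5 assumed classes); `weights=None` is used here only to skip downloading the ImageNet weights, which doesn't change the parameter counts.

```python
import numpy as np
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

# Same architecture as the solution; weights=None only to skip the download
base = MobileNetV2(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze every base layer

x = GlobalAveragePooling2D()(base.output)
x = Dense(128, activation='relu')(x)
out = Dense(5, activation='softmax')(x)  # 5 classes, as assumed above
model = Model(inputs=base.input, outputs=out)

# Count parameters on each side of the freeze
trainable = sum(int(np.prod(w.shape)) for w in model.trainable_weights)
frozen = sum(int(np.prod(w.shape)) for w in model.non_trainable_weights)
print(f"Trainable: {trainable:,}  Frozen: {frozen:,}")
```

Only the two new Dense layers train (1280·128 + 128 + 128·5 + 5 = 164,613 parameters), a small fraction of the roughly 2.3M frozen base parameters, and that gap is where the speedup comes from.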
Bonus Experiment
Try using a different pre-trained model like ResNet50 and compare training time and accuracy.
💡 Hint
Replace MobileNetV2 with ResNet50 and keep the same training steps to see which model works better for your data.
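A minimal sketch of that swap, keeping the same head and the 5-class assumption from the solution. `weights=None` here only avoids downloading the ImageNet weights while you test the wiring; use `weights='imagenet'` for the actual experiment. One caveat: ResNet50 has its own `preprocess_input` (channel-wise mean subtraction), so pass that to the `ImageDataGenerator` rather than reusing MobileNetV2's scaling.

```python
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Swap the base model; set weights='imagenet' for the real experiment
base_model = ResNet50(weights=None, include_top=False, input_shape=(224, 224, 3))
base_model.trainable = False

# Same classification head as the MobileNetV2 solution
x = GlobalAveragePooling2D()(base_model.output)
x = Dense(128, activation='relu')(x)
predictions = Dense(5, activation='softmax')(x)  # 5 classes, as assumed above
model = Model(inputs=base_model.input, outputs=predictions)
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])

# ResNet50 expects its own preprocessing, not a plain 1/255 rescale
train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input,
                                   validation_split=0.2)
```

The rest of the pipeline (directory generators, two-stage training) stays the same, so the comparison isolates the choice of base model. Expect ResNet50 to be slower per epoch than MobileNetV2, since it has far more parameters.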