Computer Vision · ~5 mins

Fine-tuning approach in Computer Vision

Introduction

Fine-tuning helps a model learn a new task faster by starting from a model that already knows something similar. It is a good fit when:

You want to teach a model to recognize new types of images but have only a small dataset.
You want to improve an existing model's accuracy on a specific task.
You want to save time and computing power by not training a model from scratch.
You want to adapt a general model to a special use case, like medical images.
You want to use a pre-trained model as a starting point for your own project.
Syntax
1. Load a pre-trained model.
2. Freeze some layers to keep old knowledge.
3. Replace or add new layers for your task.
4. Train the new layers on your data.
5. Optionally unfreeze some layers and train more.

Freezing layers means their weights do not change during training.

Replacing the last layer is common to match the number of classes in your task.
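One way to see what freezing does is to count trainable versus frozen parameters. Below is a minimal sketch using a tiny stand-in model (the layer names and sizes are illustrative, not from a real pre-trained network):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Tiny stand-in for a pre-trained base (sizes are illustrative)
inputs = layers.Input(shape=(8,))
x = layers.Dense(16, name='old_1')(inputs)
x = layers.Dense(16, name='old_2')(x)
base = models.Model(inputs, x)

# Freeze the base: its weights will not be updated during training
for layer in base.layers:
    layer.trainable = False

# Replace the top with a new head for a 5-class task
outputs = layers.Dense(5, activation='softmax', name='new_head')(base.output)
model = models.Model(base.input, outputs)

trainable = int(sum(tf.size(w) for w in model.trainable_weights))
frozen = int(sum(tf.size(w) for w in model.non_trainable_weights))
print(trainable, frozen)  # only the new head (16*5 + 5 = 85) is trainable
```

Only the new head's 85 parameters are updated by `fit`; the base's 416 parameters stay fixed, which is exactly what "freezing" means.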

Examples
This example loads MobileNetV2 pre-trained on ImageNet, freezes all layers, and adds a new output layer for 5 classes.
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model

base_model = MobileNetV2(weights='imagenet', include_top=False, pooling='avg')
for layer in base_model.layers:
    layer.trainable = False

output = Dense(5, activation='softmax')(base_model.output)
model = Model(inputs=base_model.input, outputs=output)
This example unfreezes the last 10 layers of the base model, then recompiles with a smaller learning rate before continuing training.
from tensorflow.keras.optimizers import Adam

for layer in base_model.layers[-10:]:
    layer.trainable = True

# Recompile so the trainable change takes effect; a small learning rate
# keeps the pre-trained weights from being overwritten
model.compile(optimizer=Adam(learning_rate=1e-5), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(train_data, epochs=5)  # train_data: your prepared dataset
Sample Model

This program shows how to fine-tune a pre-trained MobileNetV2 model on a small dummy dataset with 5 classes. It first trains only the new output layer, then unfreezes some layers to improve learning.

import tensorflow as tf
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical
import numpy as np

# Create dummy data: 100 images 96x96x3, 5 classes
x_train = np.random.rand(100, 96, 96, 3).astype('float32')
y_train = to_categorical(np.random.randint(5, size=100), num_classes=5)

# Load pre-trained MobileNetV2 without top layers
base_model = MobileNetV2(weights='imagenet', include_top=False, input_shape=(96,96,3), pooling='avg')

# Freeze base model layers
for layer in base_model.layers:
    layer.trainable = False

# Add new output layer for 5 classes
output = Dense(5, activation='softmax')(base_model.output)
model = Model(inputs=base_model.input, outputs=output)

# Compile model
model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy'])

# Train only new layers
history = model.fit(x_train, y_train, epochs=3, batch_size=10, verbose=2)

# Unfreeze last 20 layers for fine-tuning
for layer in base_model.layers[-20:]:
    layer.trainable = True

# Recompile with lower learning rate
model.compile(optimizer=Adam(1e-5), loss='categorical_crossentropy', metrics=['accuracy'])

# Continue training
history_fine = model.fit(x_train, y_train, epochs=2, batch_size=10, verbose=2)
Important Notes

Fine-tuning works best when your new task is similar to the original task the model was trained on.

Start by training only new layers, then gradually unfreeze more layers to avoid losing old knowledge.

Use a smaller learning rate when fine-tuning to make small adjustments.
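The last two notes (unfreeze gradually, use a smaller learning rate) can be combined into a small helper. This is a sketch; `unfreeze_last_layers` is a hypothetical name chosen here, not part of the Keras API:

```python
import tensorflow as tf

def unfreeze_last_layers(model, base_model, n, learning_rate=1e-5):
    """Unfreeze the last n base layers and recompile with a small learning rate.

    A lower learning rate keeps updates small, so the pre-trained weights
    are adjusted gently instead of being overwritten. (Hypothetical helper.)
    """
    for layer in base_model.layers[-n:]:
        layer.trainable = True
    # Recompiling is required for the trainable change to take effect
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate),
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model
```

Call it between training phases, for example `unfreeze_last_layers(model, base_model, n=20)` before the second `model.fit`.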

Summary

Fine-tuning reuses a pre-trained model to learn new tasks faster.

Freeze old layers first, then train new layers, and finally unfreeze some layers to improve.

Use smaller learning rates during fine-tuning for better results.