
Why transfer learning saves time and data in TensorFlow

Introduction

Transfer learning reuses knowledge from a model trained on one task to learn a new task quickly. This saves training time and reduces how much data you need.

It is a good fit when:

You have a small dataset but still want good results.
Training a model from scratch would take too long.
A model trained on a similar problem is already available.
You want to improve accuracy by starting from a strong base.
You want to save computing resources.
Syntax
TensorFlow
base_model = tf.keras.applications.MobileNetV2(input_shape=(224,224,3), include_top=False, weights='imagenet')
base_model.trainable = False

model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

Set weights='imagenet' to load weights pre-trained on the large ImageNet dataset.

Freeze the base model by setting trainable = False so its learned features stay fixed during training.
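You can verify that freezing worked by counting trainable weights before and after. The sketch below uses weights=None (instead of weights='imagenet') only so it runs without downloading pre-trained weights; the freezing behavior is the same either way.

```python
import tensorflow as tf

# weights=None keeps this example offline; in practice use weights='imagenet'
base_model = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights=None)
base_model.trainable = False

model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

# After freezing, only the new Dense layer contributes trainable weights
print('Trainable weights in base model:', len(base_model.trainable_weights))
print('Trainable weights in full model:', len(model.trainable_weights))
```

Because GlobalAveragePooling2D has no weights, the only trainable weights left are the Dense layer's kernel and bias.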

Examples
Load ResNet50 pre-trained on ImageNet and freeze it to reuse features.
TensorFlow
base_model = tf.keras.applications.ResNet50(weights='imagenet', include_top=False)
base_model.trainable = False
Load VGG16 pre-trained on ImageNet but allow fine-tuning by setting trainable to True.
TensorFlow
base_model = tf.keras.applications.VGG16(weights='imagenet', include_top=False)
base_model.trainable = True
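When the whole base model is trainable, a common practice is to compile with a much smaller learning rate so the pre-trained weights are adjusted gently rather than overwritten. The sketch below uses MobileNetV2 with weights=None to stay lightweight and offline, but the same pattern applies to VGG16 with weights='imagenet'; the 1e-5 learning rate is just a typical starting point, not a fixed rule.

```python
import tensorflow as tf

# weights=None keeps this example offline; in practice use weights='imagenet'
base_model = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights=None)
base_model.trainable = True  # allow fine-tuning of all layers

model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

# A small learning rate (here 1e-5) helps avoid destroying the
# pre-trained features during fine-tuning
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss='binary_crossentropy',
              metrics=['accuracy'])
```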
Sample Model

This example shows transfer learning with MobileNetV2. We freeze the base model to keep its learned features, add new layers for a binary classification task, train on a small dummy dataset, and print predictions.

TensorFlow
import tensorflow as tf
import numpy as np
from tensorflow.keras import layers, models

# Load pre-trained MobileNetV2 without its top classification layers
base_model = tf.keras.applications.MobileNetV2(input_shape=(224,224,3), include_top=False, weights='imagenet')
base_model.trainable = False

# Add new layers for binary classification
model = models.Sequential([
    base_model,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation='sigmoid')
])

# Compile model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Create dummy data (10 images, 224x224 RGB) and binary labels
x_train = np.random.random((10,224,224,3))
y_train = np.random.randint(0,2,(10,1))

# Train model for 2 epochs
history = model.fit(x_train, y_train, epochs=2, verbose=2)

# Make predictions
preds = model.predict(x_train[:2])
print('Predictions:', preds.flatten())
Important Notes

Transfer learning works best when the new task is similar to the original task.

Freezing the base model saves training time and prevents its learned features from being overwritten.

You can later unfreeze some layers to fine-tune the model for better accuracy.
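Partial unfreezing can be sketched like this: freeze everything, then mark only the last few layers trainable. The choice of 20 layers below is arbitrary for illustration, and weights=None is used only so the example runs without a download (use weights='imagenet' in practice).

```python
import tensorflow as tf

# weights=None keeps this example offline; in practice use weights='imagenet'
base_model = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights=None)

# Unfreeze the model, then re-freeze all but the last 20 layers
base_model.trainable = True
for layer in base_model.layers[:-20]:
    layer.trainable = False

n_trainable = sum(1 for layer in base_model.layers if layer.trainable)
print('Layers left trainable:', n_trainable)
```

Note that during fine-tuning, BatchNormalization layers are often kept frozen as well, since updating their statistics on a small dataset can hurt accuracy.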

Summary

Transfer learning reuses knowledge from a pre-trained model.

It saves time and data because the frozen base model already provides useful features.

It is useful when you have limited data or computing power.