Transfer learning lets us reuse knowledge learned on one task to learn a new task more quickly. This saves training time and requires less data.
Why transfer learning saves time and data in TensorFlow
```python
base_model = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights='imagenet')
base_model.trainable = False

model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
```
Set weights='imagenet' to load a model pre-trained on a large dataset.
Freeze the base model layers by setting trainable = False to keep learned features.
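To check that freezing worked, you can count the trainable parameters: with the base frozen, only the new classification head should remain trainable. A minimal sketch (using `weights=None` and a smaller input size to avoid downloading the ImageNet weights; with `weights='imagenet'` the freezing behaviour is identical):

```python
import tensorflow as tf

# Stand-in base model; weights=None avoids the ImageNet download
base_model = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights=None)
base_model.trainable = False  # freeze all base layers

model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.build((None, 96, 96, 3))

# Only the new Dense head should be trainable:
# MobileNetV2 outputs 1280 features, so Dense(1) has 1280 weights + 1 bias
trainable = sum(int(tf.size(w)) for w in model.trainable_weights)
frozen = sum(int(tf.size(w)) for w in model.non_trainable_weights)
print('trainable params:', trainable)
print('frozen params:', frozen)
```

If the trainable count is much larger than the head alone, the base was not actually frozen.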
```python
base_model = tf.keras.applications.ResNet50(weights='imagenet', include_top=False)
base_model.trainable = False  # freeze the pre-trained layers
```
```python
base_model = tf.keras.applications.VGG16(weights='imagenet', include_top=False)
base_model.trainable = True  # leave the base trainable for fine-tuning
```
This example shows transfer learning with MobileNetV2 end to end: we freeze the base model to keep its learned features, add new layers for a new task, train on a small batch of dummy data, and print predictions.
```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Load pre-trained MobileNetV2 without its classification head
base_model = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights='imagenet')
base_model.trainable = False  # freeze the learned features

# Add new layers for binary classification
model = models.Sequential([
    base_model,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation='sigmoid'),
])

# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Create dummy data (10 images, 224x224 RGB) and binary labels
x_train = np.random.random((10, 224, 224, 3))
y_train = np.random.randint(0, 2, (10, 1))

# Train for 2 epochs
history = model.fit(x_train, y_train, epochs=2, verbose=2)

# Make predictions on the first two images
preds = model.predict(x_train[:2])
print('Predictions:', preds.flatten())
```
Transfer learning works best when the new task is similar to the original task.
Freezing the base model saves training time and prevents losing learned features.
You can later unfreeze some layers to fine-tune the model for better accuracy.
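A common fine-tuning recipe is to unfreeze only the last few layers of the base model and recompile with a much lower learning rate, so the pre-trained features are adjusted gently rather than overwritten. A sketch of that step (again with `weights=None` to keep the example offline; the layer count of 20 is an arbitrary illustrative choice):

```python
import tensorflow as tf

# Stand-in base model; in practice you would use weights='imagenet'
base_model = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights=None)

# Unfreeze only the last 20 layers; keep the early feature extractors frozen
base_model.trainable = True
for layer in base_model.layers[:-20]:
    layer.trainable = False

model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])

# Recompile with a low learning rate so fine-tuning does not wreck the features
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss='binary_crossentropy', metrics=['accuracy'])

print('trainable layers in base model:',
      sum(layer.trainable for layer in base_model.layers))
```

Remember to recompile after changing `trainable`, otherwise the change has no effect on training.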
Transfer learning reuses knowledge from a pre-trained model.
It saves time and needs less data by freezing learned layers.
It is useful when you have limited data or computing power.