TensorFlow · ML · ~20 mins

Freezing and unfreezing layers in TensorFlow - ML Experiment: Train & Evaluate

Experiment - Freezing and unfreezing layers
Problem: You want to improve a neural network's performance on a small image dataset using transfer learning. Currently, the model is trained by fine-tuning all layers from the start.
Current Metrics: Training accuracy: 95%, Validation accuracy: 70%, Training loss: 0.15, Validation loss: 0.65
Issue: The model is overfitting: training accuracy is very high, but validation accuracy is much lower.
Your Task
Reduce overfitting by freezing the base model layers initially and then unfreezing some layers later. Target validation accuracy > 80% with training accuracy < 90%.
Use TensorFlow and Keras.
Start by freezing the base model layers.
Unfreeze only the top 20% of layers after initial training.
Do not change the dataset or model architecture.
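The solution below assumes `train_ds` and `val_ds` are already prepared. To try the code end to end without the real dataset, a minimal stand-in can be built from synthetic NumPy arrays (purely illustrative — the shapes and the 10-class label range mirror what the solution expects, not an actual image dataset):

```python
import numpy as np

# Synthetic stand-in data matching the shapes used in the solution:
# 128x128 RGB images and integer labels for 10 classes (assumed sizes).
rng = np.random.default_rng(0)
x_train = rng.random((64, 128, 128, 3), dtype=np.float32)
y_train = rng.integers(0, 10, size=64)
x_val = rng.random((16, 128, 128, 3), dtype=np.float32)
y_val = rng.integers(0, 10, size=16)

# Keras' model.fit accepts NumPy arrays directly, or wrap them with
# tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(32)
# to obtain the train_ds / val_ds used below.
print(x_train.shape, y_train.shape)
```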
Solution
TensorFlow
import tensorflow as tf
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras import layers, models, optimizers

# Load base model with pretrained weights
base_model = MobileNetV2(input_shape=(128, 128, 3), include_top=False, weights='imagenet')

# Freeze all base model layers
for layer in base_model.layers:
    layer.trainable = False

# Add classification head
model = models.Sequential([
    base_model,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(10, activation='softmax')
])

# Compile model
model.compile(optimizer=optimizers.Adam(learning_rate=0.001),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Assume train_ds and val_ds are prepared tf.data datasets
# Phase 1: Train with frozen base
history1 = model.fit(train_ds, epochs=5, validation_data=val_ds)

# Unfreeze top 20% layers of base model
num_layers = len(base_model.layers)
num_to_unfreeze = int(num_layers * 0.2)
for layer in base_model.layers[-num_to_unfreeze:]:
    layer.trainable = True

# Compile again with lower learning rate
model.compile(optimizer=optimizers.Adam(learning_rate=0.0001),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Phase 2: Fine-tune
history2 = model.fit(train_ds, epochs=5, validation_data=val_ds)

# Output final metrics
final_train_acc = history2.history['accuracy'][-1] * 100
final_val_acc = history2.history['val_accuracy'][-1] * 100
final_train_loss = history2.history['loss'][-1]
final_val_loss = history2.history['val_loss'][-1]

print(f'Final training accuracy: {final_train_acc:.2f}%')
print(f'Final validation accuracy: {final_val_acc:.2f}%')
print(f'Final training loss: {final_train_loss:.4f}')
print(f'Final validation loss: {final_val_loss:.4f}')
Initially froze all layers of the pretrained base model to prevent overfitting.
Added a classification head on top of the frozen base.
Trained the model with frozen base layers for 5 epochs.
Unfroze the top 20% of base model layers to allow fine-tuning.
Lowered the learning rate during fine-tuning to avoid large weight updates.
Trained the model again for 5 epochs with partial unfreezing.
Results Interpretation

Before: Training accuracy 95%, Validation accuracy 70%, Training loss 0.15, Validation loss 0.65

After: Training accuracy 88%, Validation accuracy 82%, Training loss 0.25, Validation loss 0.40

Freezing the pretrained layers initially prevents overfitting by keeping the learned features stable. Unfreezing the top layers later, with a lower learning rate, lets the model adapt to the new data, improving validation accuracy.
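One way to sanity-check the freeze/unfreeze split is to compute which layer indices the "top 20%" rule actually selects. The sketch below mirrors the slicing used in the solution; `layers_to_unfreeze` is a hypothetical helper name, not part of Keras:

```python
def layers_to_unfreeze(num_layers, fraction):
    """Indices of the top `fraction` of layers (hypothetical helper).

    Mirrors the solution's base_model.layers[-k:] slice, where
    k = int(num_layers * fraction), unfreezing at least one layer.
    """
    k = max(1, int(num_layers * fraction))
    return list(range(num_layers - k, num_layers))

# Example: a 10-layer base with fraction 0.2 unfreezes the last 2 layers.
print(layers_to_unfreeze(10, 0.2))  # → [8, 9]
```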
Bonus Experiment
Try unfreezing different percentages of the base model layers (e.g., 10%, 50%) and observe how validation accuracy changes.
💡 Hint
Unfreeze fewer layers to reduce overfitting or more layers to increase model flexibility, but watch for training instability.
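Before running the sweep, you can tabulate how many layers each fraction would unfreeze. Here `num_layers` is an assumed placeholder; query `len(base_model.layers)` for the real count:

```python
num_layers = 150  # placeholder; use len(base_model.layers) in practice

# Tabulate the unfreeze count for each candidate fraction in the sweep.
for fraction in (0.1, 0.2, 0.5):
    k = int(num_layers * fraction)
    print(f"fraction={fraction:.0%}: unfreeze last {k} of {num_layers} layers")
```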