
Freezing and unfreezing layers in TensorFlow

Introduction

Freezing a layer stops its weights from updating during training; unfreezing lets them learn again. This saves training time and preserves knowledge the model has already learned.

Freezing is useful in situations like these:

When using a pre-trained model and you want to keep some learned features fixed.
When training a large model in stages, to avoid losing skills learned earlier.
When you want to speed up training by not updating every layer.
When fine-tuning a model on new data while keeping some layers stable.
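The first use case above, keeping pre-trained features fixed, can be sketched as follows. The small Sequential named 'base' is a stand-in for a real pre-trained model (in practice you would load one, e.g. from tf.keras.applications):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Stand-in for a pre-trained feature extractor (assumption: in practice
# this would be a real pre-trained model, not a freshly built one)
base = models.Sequential([
    layers.Dense(16, activation='relu', input_shape=(8,)),
    layers.Dense(16, activation='relu'),
], name='base')

# Freeze the whole base so its learned features stay fixed
base.trainable = False

# Stack a new trainable head on top for the new task
model = models.Sequential([
    base,
    layers.Dense(1, name='head'),
])
model.compile(optimizer='adam', loss='mse')

print([layer.trainable for layer in model.layers])  # [False, True]
```

Setting trainable on a whole sub-model freezes every layer inside it, so only the new head's weights will be updated.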
Syntax
TensorFlow
layer.trainable = False  # freeze layer
layer.trainable = True   # unfreeze layer

Set trainable before compiling the model.

Freezing a layer means its weights won't change during training.
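The compile-first rule can be checked directly: flipping trainable changes which weights the training step will update, and the change only takes effect once you (re)compile. A minimal sketch:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Dense(4, input_shape=(3,)),
    layers.Dense(1),
])

print(len(model.trainable_weights))  # 4 (two kernels, two biases)

# Freeze the first layer, then compile so the change takes effect
model.layers[0].trainable = False
model.compile(optimizer='adam', loss='mse')

print(len(model.trainable_weights))  # 2 (only the second layer's weights)
```

If you change trainable after compiling, recompile the model; otherwise training continues to use the old set of trainable weights.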

Examples
This freezes all layers in the model so none will update.
TensorFlow
for layer in model.layers:
    layer.trainable = False
This freezes only the first layer of the model.
TensorFlow
model.layers[0].trainable = False
This unfreezes the last three layers to allow training.
TensorFlow
for layer in model.layers[-3:]:
    layer.trainable = True
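Layers can also be frozen selectively by name, which is handy in larger models. The layer names here ('feat1', 'feat2', 'head') are purely illustrative:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Dense(8, activation='relu', input_shape=(4,), name='feat1'),
    layers.Dense(8, activation='relu', name='feat2'),
    layers.Dense(1, name='head'),
])

# Freeze every layer whose name starts with 'feat'
for layer in model.layers:
    if layer.name.startswith('feat'):
        layer.trainable = False

print({layer.name: layer.trainable for layer in model.layers})
# {'feat1': False, 'feat2': False, 'head': True}
```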
Sample Model

This code builds a small model, freezes the first layer, trains on random data, and shows which layers trained and the final loss.

TensorFlow
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Create a simple model
model = models.Sequential([
    layers.Dense(10, activation='relu', input_shape=(5,)),
    layers.Dense(1)
])

# Freeze the first layer
model.layers[0].trainable = False

# Compile the model
model.compile(optimizer='adam', loss='mse')

# Create dummy data
x = np.random.random((10, 5))
y = np.random.random((10, 1))

# Train the model
history = model.fit(x, y, epochs=2, verbose=0)

# Check which layers are trainable
trainable_status = [layer.trainable for layer in model.layers]

print('Trainable layers:', trainable_status)
print('Loss after training:', history.history['loss'][-1])
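To confirm a frozen layer really stays fixed, you can snapshot its weights before training and compare afterwards. A self-contained sketch using the same kind of small model and dummy data:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Dense(10, activation='relu', input_shape=(5,)),
    layers.Dense(1),
])
model.layers[0].trainable = False
model.compile(optimizer='adam', loss='mse')

# Snapshot the frozen layer's kernel before training
before = model.layers[0].get_weights()[0].copy()

x = np.random.random((10, 5))
y = np.random.random((10, 1))
model.fit(x, y, epochs=2, verbose=0)

after = model.layers[0].get_weights()[0]
print(np.array_equal(before, after))  # True: the frozen weights did not change
```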
Important Notes

Always set trainable before calling compile().

Freezing layers helps keep learned features and reduces training time.

You can selectively freeze or unfreeze layers depending on your task.
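The notes above combine into the common two-phase fine-tuning workflow: train a new head with the base frozen, then unfreeze and recompile with a lower learning rate. A minimal sketch on dummy data (the small 'base' model is a stand-in for a real pre-trained one):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Stand-in base (assumption: in practice, a pre-trained model) plus a new head
base = models.Sequential(
    [layers.Dense(16, activation='relu', input_shape=(8,))], name='base')
model = models.Sequential([base, layers.Dense(1, name='head')])

x = np.random.random((32, 8))
y = np.random.random((32, 1))

# Phase 1: freeze the base and train only the head
base.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss='mse')
model.fit(x, y, epochs=2, verbose=0)

# Phase 2: unfreeze, recompile with a lower learning rate, fine-tune everything
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5), loss='mse')
model.fit(x, y, epochs=2, verbose=0)

print(len(model.trainable_weights))  # 4: all weight tensors are trainable again
```

The lower learning rate in phase 2 keeps the large, newly unfrozen base from drifting far from its learned features.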

Summary

Freezing stops layers from learning, preserving the knowledge they already hold.

Unfreezing allows layers to update weights again.

Set trainable before compiling the model.