
Early stopping in TensorFlow

Introduction

Early stopping halts training once the model stops improving on a monitored metric. This saves time and helps keep the model from becoming too focused on the training data (overfitting).

When you are training a model and want to avoid overfitting.
When training takes a long time and you want to save resources.
When you want to automatically find a good number of training epochs.
When you monitor validation performance and want to stop as soon as it plateaus.
When you want to improve model generalization on new data.
Syntax
TensorFlow
tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',
    min_delta=0.0,
    patience=0,
    verbose=0,
    mode='auto',
    baseline=None,
    restore_best_weights=False
)

monitor: The metric to watch (e.g. 'val_loss' or 'val_accuracy').

min_delta: The smallest change in the monitored metric that counts as an improvement.

patience: How many epochs with no improvement to wait before stopping.

verbose: Set to 1 to print a message when training stops early.

mode: 'min', 'max', or 'auto'. Whether the monitored metric should decrease ('min', e.g. loss) or increase ('max', e.g. accuracy); 'auto' infers the direction from the metric name.

baseline: If set, training stops when the metric fails to improve past this value.

restore_best_weights: If True, the model's weights are rolled back to the epoch with the best monitored value when training stops.
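As a small sketch of how `min_delta` and `patience` work together (the values below are illustrative, not defaults): with `min_delta=0.01`, a drop in `val_loss` from 0.500 to 0.495 is too small to count as an improvement, so it does not reset the patience counter.

```python
import tensorflow as tf

# Only count an improvement if val_loss drops by at least 0.01,
# and tolerate up to 3 epochs without such an improvement.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',
    min_delta=0.01,
    patience=3,
)

print(early_stop.monitor)   # the metric being watched
print(early_stop.patience)  # how many bad epochs are tolerated
```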

Examples
Stop training if validation loss does not improve for 3 epochs.
TensorFlow
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3)
Stop training if validation accuracy does not improve for 5 epochs.
TensorFlow
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', patience=5, mode='max')
Stop early and restore the model weights from the best epoch.
TensorFlow
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=2, restore_best_weights=True)
Sample Model

This code trains a simple model on random data. It uses early stopping to stop if validation loss does not improve for 2 epochs. It also restores the best weights found during training.

TensorFlow
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Create a simple model
model = models.Sequential([
    layers.Input(shape=(5,)),
    layers.Dense(10, activation='relu'),
    layers.Dense(1, activation='sigmoid')
])

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Generate dummy data
x_train = np.random.random((100, 5))
y_train = np.random.randint(2, size=(100, 1))
x_val = np.random.random((20, 5))
y_val = np.random.randint(2, size=(20, 1))

# Setup early stopping
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=2, restore_best_weights=True)

# Train model with early stopping
history = model.fit(
    x_train, y_train,
    epochs=20,
    validation_data=(x_val, y_val),
    callbacks=[early_stop],
    verbose=2
)

# Print final epoch and best val_loss
final_epoch = len(history.history['loss'])
best_val_loss = min(history.history['val_loss'])
print(f"Training stopped after {final_epoch} epochs")
print(f"Best validation loss: {best_val_loss:.4f}")
Important Notes

Early stopping helps prevent overfitting by stopping training when the model stops improving on validation data.

Setting restore_best_weights=True returns the model to the best state found during training.

Choose the monitor metric based on your goal (loss or accuracy), and make sure mode matches it: loss should decrease ('min'), accuracy should increase ('max').

Summary

Early stopping stops training when validation performance stops improving.

It saves time and helps the model generalize better.

Use patience to control how long to wait before stopping.
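Putting the summary together, a typical configuration (the values here are illustrative choices, not defaults) combines patience, min_delta, and restore_best_weights:

```python
import tensorflow as tf

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',          # watch validation loss
    patience=5,                  # tolerate 5 epochs with no improvement
    min_delta=0.001,             # changes smaller than this don't count
    restore_best_weights=True,   # roll back to the best epoch's weights
)
```

Pass this callback to `model.fit(..., callbacks=[early_stop])` as shown in the sample model above.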