Early stopping halts training once the model stops improving on a validation metric. This saves time and helps keep the model from becoming too focused on the training data (overfitting).
Early stopping in TensorFlow
tf.keras.callbacks.EarlyStopping(
monitor='val_loss',
min_delta=0.0,
patience=0,
verbose=0,
mode='auto',
baseline=None,
restore_best_weights=False
)

monitor: The metric to watch, such as 'val_loss' or 'val_accuracy'.
min_delta: The minimum change that counts as an improvement.
patience: How many epochs to wait without improvement before stopping.
mode: Whether the monitored metric should decrease ('min'), increase ('max'), or be inferred from its name ('auto').
restore_best_weights: Whether to roll the model back to the weights from its best epoch when training stops.
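To make the patience and min_delta parameters concrete, here is a minimal pure-Python sketch of the stopping rule. This is an illustration of the idea, not TensorFlow's actual implementation: training stops once the monitored loss fails to improve by at least min_delta for patience consecutive epochs.

```python
def stopped_epoch(val_losses, patience=2, min_delta=0.0):
    """Return the epoch index at which early stopping would trigger."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best - min_delta:   # improvement: record it, reset the counter
            best = loss
            wait = 0
        else:                         # no improvement: use up one epoch of patience
            wait += 1
            if wait >= patience:
                return epoch          # patience exhausted, stop here
    return len(val_losses) - 1        # training ran to completion

# Loss improves for three epochs, then plateaus. With patience=2,
# training stops after the second non-improving epoch (epoch 4),
# and the final value 0.6 is never reached.
print(stopped_epoch([0.9, 0.8, 0.7, 0.7, 0.7, 0.6], patience=2))  # → 4
```

A larger min_delta makes the rule stricter: a drop from 0.90 to 0.89 does not count as improvement when min_delta=0.05, so the counter still advances.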
Stop when validation loss has not improved for 3 epochs:
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3)
Stop when validation accuracy has not improved for 5 epochs (mode='max' because higher accuracy is better):
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', patience=5, mode='max')
Stop after 2 stagnant epochs and restore the best weights seen so far:
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=2, restore_best_weights=True)
The following code trains a simple model on random data. It uses early stopping to halt training if validation loss does not improve for 2 epochs, and it restores the best weights found during training.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Create a simple model
model = models.Sequential([
    layers.Dense(10, activation='relu', input_shape=(5,)),
    layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Generate dummy data
x_train = np.random.random((100, 5))
y_train = np.random.randint(2, size=(100, 1))
x_val = np.random.random((20, 5))
y_val = np.random.randint(2, size=(20, 1))

# Set up early stopping
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', patience=2, restore_best_weights=True
)

# Train the model with early stopping
history = model.fit(
    x_train, y_train,
    epochs=20,
    validation_data=(x_val, y_val),
    callbacks=[early_stop],
    verbose=2
)

# Print the final epoch count and the best validation loss
final_epoch = len(history.history['loss'])
best_val_loss = min(history.history['val_loss'])
print(f"Training stopped after {final_epoch} epochs")
print(f"Best validation loss: {best_val_loss:.4f}")
Early stopping helps prevent overfitting by halting training when the model stops improving on validation data.
Setting restore_best_weights=True returns the model to the best state found during training, rather than leaving it at the (possibly worse) final epoch.
Choose the monitored metric based on your goal: 'val_loss' if you want the loss minimized, or 'val_accuracy' (with mode='max') if you want accuracy maximized.
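The idea behind restore_best_weights can be sketched in a few lines of plain Python (again an illustration, not TensorFlow's code): snapshot the weights whenever the monitored metric improves, then hand that snapshot back when training stops.

```python
def train_with_restore(val_losses):
    """Return the (stand-in) weights from the epoch with the best loss."""
    best_loss = float("inf")
    best_weights = None
    for epoch, loss in enumerate(val_losses):
        weights = f"weights_after_epoch_{epoch}"  # stand-in for real weight tensors
        if loss < best_loss:
            best_loss = loss
            best_weights = weights   # snapshot taken at the improvement
    return best_weights              # restored when training stops

# The best loss (0.3) occurs at epoch 1, so the model ends with those
# weights, not the last epoch's.
print(train_with_restore([0.5, 0.3, 0.4, 0.6]))  # → weights_after_epoch_1
```

Without this option, the model keeps whatever weights it had at the moment training stopped, which by definition come from epochs that were not improving.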
Early stopping stops training when validation performance stops improving.
It saves time and helps the model generalize better.
Use patience to control how many epochs to wait without improvement before stopping.