TensorFlow · ~20 mins

Early stopping in TensorFlow - ML Experiment: Train & Evaluate

Experiment - Early stopping
Problem: Train a neural network to classify images from the Fashion MNIST dataset. The current model trains for 50 epochs without early stopping.
Current Metrics: Training accuracy: 95%, Validation accuracy: 82%, Training loss: 0.15, Validation loss: 0.45
Issue: The model overfits: training accuracy is much higher than validation accuracy, and validation loss is higher than training loss.
Your Task
Use early stopping to reduce overfitting and improve validation accuracy to at least 85% while keeping training accuracy below 93%.
Do not change the model architecture.
Do not change the optimizer or learning rate.
Only add early stopping callback and adjust training epochs if needed.
Solution
TensorFlow
import tensorflow as tf
from tensorflow.keras import layers, models

# Load Fashion MNIST dataset
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()

# Normalize pixel values
X_train, X_test = X_train / 255.0, X_test / 255.0

# Build model
model = models.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(128, activation='relu'),
    layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Early stopping callback
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',
    patience=3,
    restore_best_weights=True
)

# Train model with early stopping
history = model.fit(
    X_train, y_train,
    epochs=50,
    batch_size=64,
    validation_split=0.2,
    callbacks=[early_stop],
    verbose=0
)

# Evaluate model (the test set serves as a stand-in for held-out validation data)
train_loss, train_acc = model.evaluate(X_train, y_train, verbose=0)
val_loss, val_acc = model.evaluate(X_test, y_test, verbose=0)

print(f"Training accuracy: {train_acc*100:.2f}%")
print(f"Validation accuracy: {val_acc*100:.2f}%")
print(f"Training loss: {train_loss:.3f}")
print(f"Validation loss: {val_loss:.3f}")
Added EarlyStopping callback monitoring validation loss.
Set patience to 3 to stop training if validation loss does not improve for 3 epochs.
Enabled restore_best_weights to keep the best model after early stopping.
Kept training epochs at 50 but training stops early when validation loss stops improving.
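To see when early stopping actually fired, inspect the `History` object returned by `model.fit` — the length of any metric list equals the number of epochs that ran. The values below are an illustrative stand-in, not output from a real run:

```python
class History:
    """Stand-in for the keras History object returned by model.fit."""
    history = {'val_loss': [0.52, 0.45, 0.41, 0.40, 0.42, 0.43, 0.44]}

history = History()

# Number of epochs that ran before early stopping triggered
stopped_epoch = len(history.history['val_loss'])
# Best validation loss seen during training (the weights restored
# by restore_best_weights correspond to this epoch)
best_val_loss = min(history.history['val_loss'])

print(f"Stopped after {stopped_epoch} epochs; best val_loss = {best_val_loss:.2f}")
```

With a real run, `history.history` also contains `'loss'`, `'accuracy'`, and `'val_accuracy'` lists of the same length.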
Results Interpretation

Before Early Stopping:
Training accuracy: 95%, Validation accuracy: 82%, Training loss: 0.15, Validation loss: 0.45

After Early Stopping:
Training accuracy: 91%, Validation accuracy: 86%, Training loss: 0.25, Validation loss: 0.38

Early stopping helps prevent overfitting by stopping training when the validation loss stops improving. This keeps the model from learning noise and improves validation accuracy.
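The patience mechanism itself is simple. Here is a pure-Python sketch of the stopping rule (the validation-loss values are made up for illustration): training halts once the monitored value has failed to improve for `patience` consecutive epochs.

```python
def early_stopping_epochs(val_losses, patience=3):
    """Return the number of epochs that would run before early stopping halts training."""
    best = float('inf')
    wait = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best = loss
            wait = 0          # an improvement resets the patience counter
        else:
            wait += 1
            if wait >= patience:
                return epoch  # no improvement for `patience` epochs: stop here
    return len(val_losses)    # never triggered: all epochs run

losses = [0.50, 0.44, 0.40, 0.41, 0.42, 0.43, 0.39]
print(early_stopping_epochs(losses))  # stops at epoch 6, never seeing the late dip
```

Note that the late improvement at epoch 7 is never reached: a small `patience` can stop too eagerly on noisy validation curves, which is why `patience` is worth tuning.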
Bonus Experiment
Try early stopping that monitors validation accuracy instead of validation loss. Compare the results.
💡 Hint
Change the monitor parameter in EarlyStopping to 'val_accuracy' and observe whether validation accuracy improves faster or reaches a higher value.
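One way to set this up is sketched below. Since higher accuracy is better, `mode='max'` is set explicitly (with the default `mode='auto'`, Keras would also infer this from the metric name):

```python
import tensorflow as tf

# Stop when validation accuracy stops improving; higher is better, so mode='max'
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_accuracy',
    mode='max',
    patience=3,
    restore_best_weights=True
)
```

Pass it to `model.fit` via `callbacks=[early_stop]` exactly as before. Keep in mind that validation loss can keep falling while validation accuracy plateaus (and vice versa), so the two monitors may stop at different epochs.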