
Training history and visualization in TensorFlow

Introduction
Training history helps us see how well a model learns over time. Visualization makes it easy to understand the model's progress and spot problems.
You want to check if your model is improving during training.
You want to compare training and validation performance to detect overfitting.
You want to decide when to stop training based on the learning curves.
You want to share model training results with others in a clear way.
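For the "when to stop training" use case, Keras ships an EarlyStopping callback that watches a metric from the history and halts training when it stops improving. A minimal sketch (the patience value and the variable names `x_train`, `y_train`, `x_val`, `y_val` are placeholders, not part of this lesson's model):

```python
import tensorflow as tf

# Stop training when val_loss has not improved for 3 epochs in a row,
# and roll the model back to the best weights seen so far.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',
    patience=3,
    restore_best_weights=True,
)

# Passed to fit() alongside validation data, e.g.:
# history = model.fit(x_train, y_train, epochs=100,
#                     validation_data=(x_val, y_val),
#                     callbacks=[early_stop])
```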
Syntax
TensorFlow
history = model.fit(x_train, y_train, epochs=10, validation_data=(x_val, y_val))

import matplotlib.pyplot as plt
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='val loss')
plt.legend()
plt.show()
The fit() method returns a History object that stores training details.
history.history is a dictionary mapping metric names such as 'loss' and 'val_loss' to lists with one value per epoch.
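As a concrete illustration of that structure, here is a hand-made dict with the shape history.history typically has after a 3-epoch run with validation data and metrics=['accuracy'] (the numbers are invented for illustration):

```python
# Illustrative shape of history.history; every value list has one
# entry per epoch, in epoch order.
history_dict = {
    'loss':         [0.69, 0.55, 0.41],
    'accuracy':     [0.50, 0.75, 1.00],
    'val_loss':     [0.70, 0.60, 0.52],
    'val_accuracy': [0.50, 0.50, 0.75],
}

# The x-axis for plotting is simply one tick per epoch:
epochs = list(range(1, len(history_dict['loss']) + 1))
print(epochs)  # → [1, 2, 3]
```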
Examples
Train the model for 5 epochs without validation data.
TensorFlow
history = model.fit(x_train, y_train, epochs=5)
Train with 20% of training data used for validation.
TensorFlow
history = model.fit(x_train, y_train, epochs=10, validation_split=0.2)
Plot training and validation accuracy over epochs.
TensorFlow
plt.plot(history.history['accuracy'], label='train accuracy')
plt.plot(history.history['val_accuracy'], label='val accuracy')
plt.legend()
plt.show()
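Beyond plotting, the same history lists can be queried directly, for example to find which epoch had the best validation loss. A small sketch on a hand-made list (the values are invented for illustration):

```python
# Find the 1-based epoch with the lowest validation loss from a
# recorded history list; values here are invented for illustration.
val_loss = [0.80, 0.55, 0.47, 0.52, 0.60]
best_epoch = min(range(len(val_loss)), key=val_loss.__getitem__) + 1
print(best_epoch)  # → 3
```

In real code, `val_loss` would come from `history.history['val_loss']`.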
Sample Model
This code trains a small neural network on the XOR problem and shows how loss and accuracy change over 20 epochs. It also prints the final accuracy.
TensorFlow
import tensorflow as tf
from tensorflow.keras import layers, models
import matplotlib.pyplot as plt
import numpy as np

# Prepare simple data: XOR problem
x_train = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y_train = np.array([0, 1, 1, 0], dtype=np.float32)

# Build a small model
model = models.Sequential([
    layers.Dense(4, activation='relu', input_shape=(2,)),
    layers.Dense(1, activation='sigmoid')
])

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train model and save history
history = model.fit(x_train, y_train, epochs=20, verbose=0)

# Plot loss and accuracy
plt.figure(figsize=(10,4))
plt.subplot(1,2,1)
plt.plot(history.history['loss'], label='loss')
plt.title('Training Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()

plt.subplot(1,2,2)
plt.plot(history.history['accuracy'], label='accuracy')
plt.title('Training Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()

plt.tight_layout()
plt.show()

# Print final accuracy
final_acc = history.history['accuracy'][-1]
print(f'Final training accuracy: {final_acc:.2f}')
Important Notes
Validation data helps check if the model is learning patterns or just memorizing.
If training loss goes down but validation loss goes up, the model might be overfitting.
Plotting metrics after training gives a clear picture of model behavior.
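The overfitting pattern described above can also be checked programmatically from the history dict. A rough sketch (the window size and the sample numbers are arbitrary, chosen only for illustration):

```python
# Flag a run as possibly overfitting if, over the last few epochs,
# training loss kept falling while validation loss kept rising.
def looks_overfit(history_dict, window=3):
    loss = history_dict['loss']
    val_loss = history_dict['val_loss']
    if len(loss) <= window:
        return False
    train_falling = all(loss[i] < loss[i - 1] for i in range(-window, 0))
    val_rising = all(val_loss[i] > val_loss[i - 1] for i in range(-window, 0))
    return train_falling and val_rising

# Invented history showing the classic divergence pattern:
example = {
    'loss':     [0.9, 0.7, 0.5, 0.4, 0.3, 0.2],
    'val_loss': [0.9, 0.8, 0.7, 0.75, 0.8, 0.9],
}
print(looks_overfit(example))  # → True
```

In practice, the dict passed in would be `history.history` from a run with validation data.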
Summary
Training history stores loss and accuracy for each epoch.
Visualizing history helps understand model learning and spot issues.
Use matplotlib to plot training and validation metrics easily.