
Training history and visualization in TensorFlow - ML Experiment: Train & Evaluate

Problem: You trained a neural network on a small image dataset to classify images into 3 categories. The model runs for 20 epochs.
Current Metrics: Training accuracy: 95%, Validation accuracy: 80%, Training loss: 0.15, Validation loss: 0.45
Issue: The model shows signs of overfitting: training accuracy is much higher than validation accuracy, and validation loss is higher than training loss.
Your Task
Visualize the training and validation accuracy and loss over epochs to better understand the model's learning behavior.
Use TensorFlow and Matplotlib only.
Plot both accuracy and loss on separate graphs.
Include legends and axis labels for clarity.
Solution
import tensorflow as tf
import matplotlib.pyplot as plt

# Load example dataset
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Preprocess data
x_train, x_test = x_train / 255.0, x_test / 255.0

# Use only 3 classes for simplicity
train_filter = y_train < 3
test_filter = y_test < 3
x_train, y_train = x_train[train_filter], y_train[train_filter]
x_test, y_test = x_test[test_filter], y_test[test_filter]

# Build simple model
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train the model; fit() returns a History object whose .history dict
# records per-epoch metrics (here the MNIST test split serves as validation data)
history = model.fit(x_train, y_train, epochs=20, validation_data=(x_test, y_test))

# Plot training & validation accuracy values
plt.figure(figsize=(12, 5))
plt.subplot(1, 2, 1)
plt.plot(history.history['accuracy'], label='Train Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.title('Model Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()

# Plot training & validation loss values
plt.subplot(1, 2, 2)
plt.plot(history.history['loss'], label='Train Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Model Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()

plt.tight_layout()
plt.show()
Plots training and validation accuracy over epochs on one subplot.
Plots training and validation loss over epochs on a second subplot.
Uses Matplotlib legends and axis labels to keep both graphs readable.
Results Interpretation

Before visualization: You only had numbers showing training and validation accuracy and loss.

After visualization: You see clear graphs showing training accuracy steadily increasing and validation accuracy plateauing or fluctuating, indicating overfitting. Loss graphs show training loss decreasing smoothly while validation loss decreases less or increases.

Visualizing training history helps you understand how your model learns over time and identify issues like overfitting or underfitting. This guides you to improve your model effectively.
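Beyond eyeballing the curves, the same history data can be summarized numerically. Below is a minimal sketch in plain Python that computes the train-validation accuracy gap and finds the epoch with the lowest validation loss; the `history` dict here is illustrative sample data shaped like Keras's `history.history`, not real training output.

```python
# Illustrative Keras-style history dict (sample numbers, not real training output)
history = {
    "accuracy":     [0.70, 0.85, 0.92, 0.95],
    "val_accuracy": [0.68, 0.78, 0.80, 0.80],
    "loss":         [0.60, 0.35, 0.20, 0.15],
    "val_loss":     [0.62, 0.48, 0.44, 0.45],
}

# Generalization gap: final training accuracy minus final validation accuracy.
# A large gap is a numeric symptom of overfitting.
gap = history["accuracy"][-1] - history["val_accuracy"][-1]

# Epoch (0-indexed) with the lowest validation loss -- a natural
# early-stopping point, since val_loss rises after it.
best_epoch = min(range(len(history["val_loss"])),
                 key=history["val_loss"].__getitem__)

print(f"Final accuracy gap: {gap:.2f}")
print(f"Best epoch by val_loss: {best_epoch}")
```

With a real run, replace the sample dict with `history.history` from `model.fit`; a gap near 0.15, as in this experiment's metrics, is a strong sign the model is memorizing the training set.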
Bonus Experiment
Try adding dropout layers to the model and visualize the new training history to see whether the overfitting is reduced.
💡 Hint
Add tf.keras.layers.Dropout(0.5) after the hidden Dense layer and retrain the model.
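Following the hint, a sketch of the modified architecture might look like this. It keeps the solution's layers and inserts Dropout after the hidden Dense layer; 0.5 is the rate from the hint, not a tuned value.

```python
import tensorflow as tf

# Same architecture as the solution, with Dropout added after the hidden
# Dense layer. Dropout(0.5) randomly zeroes half of the hidden activations
# on each training step, which discourages co-adaptation and typically
# narrows the train/validation gap. It is automatically inactive at
# inference time.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dropout(0.5),   # active only during training
    tf.keras.layers.Dense(3, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```

Retrain with the same `model.fit` call and re-plot the history: with dropout, expect training accuracy to climb more slowly while validation loss tracks training loss more closely.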