Training history and visualization in TensorFlow - Model Metrics & Evaluation

When training a model, we watch metrics like loss and accuracy over time (epochs). Loss shows how well the model fits the data; lower is better. Accuracy shows how many predictions are correct; higher is better. Visualizing these helps us see if the model is learning or stuck.
Training history visualization usually shows line charts of loss and accuracy for both training and validation sets over epochs.
Epoch | Train Loss | Val Loss | Train Acc | Val Acc
------|------------|----------|-----------|--------
  1   |    0.65    |   0.70   |   0.60    |  0.58
  2   |    0.50    |   0.55   |   0.75    |  0.70
  3   |    0.40    |   0.45   |   0.82    |  0.78
 ...  |    ...     |   ...    |    ...    |   ...
This table is often shown as a line graph with epochs on the x-axis and metric values on the y-axis.
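In TensorFlow/Keras, these numbers come from the `History` object that `model.fit(...)` returns: its `.history` attribute is a dict mapping metric names to per-epoch lists. Below is a minimal sketch that builds such a dict by hand (using the values from the table above) and prints it row by row; the dict literal stands in for a real training run so the example is self-contained.

```python
# Sketch: working with a Keras-style training history.
# model.fit(...) returns a History object whose .history attribute
# is a dict like this one; here we fill it in by hand (values taken
# from the table above) instead of actually training a model.
history = {
    "loss":         [0.65, 0.50, 0.40],
    "val_loss":     [0.70, 0.55, 0.45],
    "accuracy":     [0.60, 0.75, 0.82],
    "val_accuracy": [0.58, 0.70, 0.78],
}

def history_rows(history):
    """Turn the per-metric lists into per-epoch rows like the table above."""
    epochs = range(1, len(history["loss"]) + 1)
    return [
        (e,
         history["loss"][i], history["val_loss"][i],
         history["accuracy"][i], history["val_accuracy"][i])
        for i, e in enumerate(epochs)
    ]

for epoch, tl, vl, ta, va in history_rows(history):
    print(f"{epoch} | {tl:.2f} | {vl:.2f} | {ta:.2f} | {va:.2f}")
```

With matplotlib installed, the same dict becomes the line graph described above: plot `history["loss"]` and `history["val_loss"]` against the epoch numbers with `plt.plot`, epochs on the x-axis and metric values on the y-axis.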
While training history focuses on loss and accuracy, precision and recall are also important metrics to track, especially for imbalanced data. Sometimes improving precision lowers recall and vice versa. Watching training history helps us decide if the model is improving overall or just memorizing.
For example, if validation loss stops improving but training loss keeps dropping, the model might be overfitting, hurting recall or precision on new data.
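The precision-recall trade-off mentioned above can be made concrete with a small standalone example. The labels and scores below are made up for illustration: raising the decision threshold makes the classifier stricter, which here raises precision and lowers recall.

```python
# Sketch: precision and recall at different decision thresholds,
# computed from scratch on made-up binary labels and scores.
def precision_recall(y_true, y_score, threshold):
    """Precision and recall of thresholded scores against true labels."""
    y_pred = [1 if s >= threshold else 0 for s in y_score]
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

y_true  = [1, 0, 1, 1, 0, 0, 1, 0]          # hypothetical labels
y_score = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]

# Moderate threshold: balanced precision and recall.
print(precision_recall(y_true, y_score, 0.5))   # (0.75, 0.75)
# Strict threshold: precision rises to 1.0, recall drops to 0.25.
print(precision_recall(y_true, y_score, 0.85))  # (1.0, 0.25)
```

On imbalanced data, tracking these alongside accuracy in the training history gives a much clearer picture than accuracy alone.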
Good: Training and validation loss both decrease smoothly and stabilize close together. Accuracy improves steadily on both sets.
Bad: Training loss keeps dropping while validation loss rises (overfitting). Validation accuracy stays low (underfitting) or fluctuates wildly (often a sign of noisy data, a too-small validation set, or an unstable learning rate).
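The "bad" pattern above can be checked programmatically. Here is a crude sketch, under the assumption that "overfitting" means training loss still falling while validation loss rises over the most recent epochs; the loss lists are invented for illustration.

```python
# Sketch: a crude overfitting check on loss curves. "Overfit" here
# means training loss fell at every step over the last `window` epochs
# while validation loss rose at every step (a deliberately strict rule).
def looks_overfit(train_loss, val_loss, window=3):
    """True if train loss fell and val loss rose over the last `window` epochs."""
    if len(train_loss) < window + 1 or len(val_loss) < window + 1:
        return False
    t_recent = train_loss[-window - 1:]
    v_recent = val_loss[-window - 1:]
    train_falling = all(b < a for a, b in zip(t_recent, t_recent[1:]))
    val_rising = all(b > a for a, b in zip(v_recent, v_recent[1:]))
    return train_falling and val_rising

train    = [0.65, 0.50, 0.40, 0.33, 0.27]  # keeps dropping
good_val = [0.70, 0.55, 0.45, 0.42, 0.41]  # tracks training loss
bad_val  = [0.70, 0.45, 0.50, 0.54, 0.58]  # bottoms out, then climbs

print(looks_overfit(train, good_val))  # False: curves stay close
print(looks_overfit(train, bad_val))   # True: validation loss climbing
```

A rule this strict would miss noisier curves in practice; smoothing the losses or comparing means over the window would be more robust, but the idea is the same.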
- Ignoring validation metrics: Only watching training loss can hide overfitting.
- Misinterpreting fluctuations: Small ups and downs are normal; don't panic early.
- Not using early stopping: Without it, the model may keep training and overfit long after validation metrics have stopped improving.
- Data leakage: If validation data leaks into training, metrics look too good but model fails in real use.
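In Keras, the early-stopping pitfall above is addressed by passing `tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=...)` to `model.fit`. The sketch below shows the patience logic behind that callback as standalone Python, replaying a recorded list of validation losses instead of training a real model.

```python
# Sketch of the patience logic behind early stopping: stop once
# validation loss has not improved for `patience` consecutive epochs.
# (Keras implements this as tf.keras.callbacks.EarlyStopping; this
# standalone version just replays a list of validation losses.)
def early_stop_epoch(val_losses, patience=2):
    """Return the 1-based epoch at which training would stop,
    or None if it runs through every epoch."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best, wait = loss, 0  # new best: reset the patience counter
        else:
            wait += 1
            if wait >= patience:
                return epoch      # patience exhausted: stop here
    return None

# Val loss improves for 3 epochs, then stalls for 2: stop at epoch 5.
print(early_stop_epoch([0.70, 0.55, 0.45, 0.46, 0.48], patience=2))  # 5
```

The real Keras callback also supports `restore_best_weights=True`, which rolls the model back to the epoch with the best monitored value rather than keeping the final (possibly overfit) weights.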
Is a model with high training accuracy but rising validation loss a good model? No, this suggests overfitting: the model fits the training data well (high accuracy) but performs worse on new data (rising validation loss). You should stop training earlier or use regularization.