When we split data into train, validation, and test sets, the main goal is to measure both how well the model learns and how well it generalizes to data it has never seen.
Training metrics (like loss and accuracy) show how well the model learns from the training data.
Validation metrics guide model tuning (hyperparameter choices, early stopping) and help detect overfitting, because they are computed on data the model did not train on.
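As a minimal sketch of validation-based tuning: below, a held-out validation set picks the regularization strength for a logistic regression. The dataset, the candidate C values, and the split ratio are all arbitrary choices for illustration, not prescribed by any particular recipe.

```python
# Sketch: using a held-out validation set to tune a hyperparameter.
# Synthetic data and candidate C values are arbitrary illustration choices.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=800, n_features=20, random_state=1)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, random_state=1
)

best_C, best_val_acc = None, -1.0
for C in [0.01, 0.1, 1.0, 10.0]:
    model = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
    val_acc = model.score(X_val, y_val)  # accuracy on unseen validation data
    if val_acc > best_val_acc:
        best_C, best_val_acc = C, val_acc

print(f"best C={best_C}, validation accuracy={best_val_acc:.3f}")
```

The key point is that the selection decision uses only validation accuracy, never test accuracy, so the test set stays unbiased.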
Test metrics give a final, unbiased estimate of real-world performance; for that estimate to stay unbiased, the test set should be evaluated only once, after all tuning is done.
So the key metrics depend on the task (accuracy, loss, precision, recall, and so on), but the important point is to measure them separately on the train, validation, and test sets: for example, a large gap between train and validation accuracy is a classic sign of overfitting.
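The idea above can be sketched end to end: split the data three ways, fit on the training portion only, then report the same metrics on each split. The synthetic dataset, the 60/20/20 ratios, and the choice of logistic regression are all assumptions made for the example.

```python
# Sketch: computing the same metrics on train, validation, and test splits.
# Synthetic data, 60/20/20 ratios, and the model are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# First carve off 40%, then split that half into validation and test.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.4, random_state=0
)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.5, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

for name, X_s, y_s in [("train", X_train, y_train),
                       ("val", X_val, y_val),
                       ("test", X_test, y_test)]:
    pred = model.predict(X_s)
    print(f"{name}: acc={accuracy_score(y_s, pred):.3f} "
          f"precision={precision_score(y_s, pred):.3f} "
          f"recall={recall_score(y_s, pred):.3f}")
```

Comparing the three printed rows is what reveals model behavior: similar numbers across splits suggest good generalization, while a training score far above the others suggests overfitting.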