Gradient descent is about walking downhill to the lowest point of a surface: it minimizes a loss function by repeatedly stepping in the direction of steepest descent. The key metric is the loss value itself, such as mean squared error (MSE) for regression or cross-entropy loss for classification.
We want the loss to shrink with each step, which shows the model is learning. Tracking the loss over time tells us whether gradient descent is working well.
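As a minimal sketch of this idea, the loop below runs gradient descent on a 1-D linear regression with MSE loss and records the loss at every step; the synthetic data, learning rate, and step count are illustrative assumptions, not values from the text.

```python
# Minimal sketch: gradient descent on y = w*x + b with MSE loss.
# Data, learning rate, and iteration count are illustrative assumptions.

def mse(w, b, xs, ys):
    """Mean squared error of the line y = w*x + b."""
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def gradient_step(w, b, xs, ys, lr=0.05):
    """One gradient-descent update: step w and b downhill on the MSE surface."""
    n = len(xs)
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    return w - lr * dw, b - lr * db

# Synthetic data generated from y = 3x + 1
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 4.0, 7.0, 10.0]

w, b = 0.0, 0.0
losses = [mse(w, b, xs, ys)]
for _ in range(200):
    w, b = gradient_step(w, b, xs, ys)
    losses.append(mse(w, b, xs, ys))

print(losses[0], "->", losses[-1])  # loss shrinks as the model learns
```

With a small enough learning rate, each step lowers the loss, so the recorded curve decreases toward zero, which is exactly the signal described above.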
Sometimes we also look at training or validation accuracy to check that the model is improving on the task we actually care about, but the loss is the main guide for gradient descent itself.