When tuning the learning rate for fine-tuning a model, the key metrics to watch are training loss and validation loss: together they show how well the model is fitting the training data and whether those gains carry over to data it has not seen.
A well-chosen learning rate drives the loss down steadily, without oscillation or stalls. If the rate is too high, each update overshoots the minimum, so the loss bounces around or diverges. If it is too low, training crawls and can plateau before reaching a good solution.
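The effect of the learning rate can be seen even on a toy objective. Below is a minimal sketch using plain gradient descent on f(w) = w², where the starting point, step counts, and the two rates are illustrative assumptions, not recommended values:

```python
# Sketch: how the learning rate affects gradient descent on f(w) = w^2.
# The gradient of w^2 is 2*w, so each update is w -= lr * 2 * w.

def descend(lr, steps=50, w=1.0):
    """Run plain gradient descent from w and return the final parameter."""
    for _ in range(steps):
        w -= lr * 2 * w
    return w

good = descend(lr=0.1)   # |w| shrinks by a constant factor each step
high = descend(lr=1.1)   # each step overshoots; |w| grows and diverges

print(abs(good))  # close to the minimum at 0
print(abs(high))  # far larger than where it started
```

With lr=0.1 the iterate contracts toward the minimum; with lr=1.1 the update flips sign and grows each step, the same oscillate-and-diverge behavior you would see in a real loss curve.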
Also track validation accuracy (or another task metric) on held-out data: if it improves alongside training loss, fine-tuning is effective; if training loss keeps falling while the validation metric stalls or degrades, the model is overfitting and the run should be stopped or regularized.
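A common way to act on the validation curve is early stopping: halt once the validation loss has not improved for a few epochs. Here is a minimal sketch; the loss values and the patience setting are made-up numbers standing in for real per-epoch measurements:

```python
# Hedged sketch of early stopping on a list of per-epoch validation losses.

def early_stop_epoch(val_losses, patience=2):
    """Return the epoch index at which training would stop: the first
    epoch after `patience` consecutive epochs with no new best loss."""
    best = float("inf")
    epochs_without_improvement = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                return epoch
    return len(val_losses) - 1  # never triggered; ran to the end

# Validation loss improves for three epochs, then creeps back up:
losses = [1.0, 0.8, 0.7, 0.71, 0.72, 0.73]
stop = early_stop_epoch(losses)
print(stop)  # stops at epoch 4, after two epochs without improvement
```

The same pattern (track the best metric, reset a counter on improvement) is what early-stopping callbacks in common training frameworks implement.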