When you freeze layers in a model, you keep their weights fixed and train only the remaining layers. This is useful when you have a small dataset or want to preserve features learned during pretraining. The key metrics to watch are validation loss and validation accuracy: they show whether the model is learning useful new patterns without forgetting what it already knew.
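As a concrete illustration, here is a minimal sketch of layer freezing in PyTorch (the original text does not name a framework, so this choice, and the toy model and layer split, are assumptions). Gradients are disabled for the first layer, and only the still-trainable parameters are handed to the optimizer.

```python
# Minimal layer-freezing sketch in PyTorch. The model architecture and
# which layer is frozen are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 32),  # "pretrained" feature layer: will be frozen
    nn.ReLU(),
    nn.Linear(32, 4),   # task head: stays trainable
)

# Freeze the first linear layer by disabling gradient tracking.
for param in model[0].parameters():
    param.requires_grad = False

# Pass only the trainable parameters to the optimizer.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable params: {trainable}/{total}")  # → trainable params: 132/676
```

Counting trainable parameters before training is a quick sanity check that the freeze actually took effect.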
If validation accuracy improves and validation loss decreases after unfreezing layers, the model is adapting well. If validation loss rises or accuracy drops, the model may be overfitting to the new dataset or forgetting its pretrained features.
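That decision rule can be sketched as a small helper that compares validation metrics recorded before and after unfreezing; the function name and the sample numbers below are hypothetical, not from any library.

```python
# Hypothetical helper: flags how unfreezing affected validation metrics.
# The verdict strings and example values are illustrative.
def check_adaptation(val_loss_before, val_acc_before,
                     val_loss_after, val_acc_after):
    """Return a short verdict on how unfreezing affected the model."""
    if val_loss_after < val_loss_before and val_acc_after > val_acc_before:
        return "adapting well"
    if val_loss_after > val_loss_before or val_acc_after < val_acc_before:
        return "possible overfitting or forgetting"
    return "no clear change"

print(check_adaptation(0.52, 0.81, 0.44, 0.86))  # → adapting well
print(check_adaptation(0.52, 0.81, 0.67, 0.74))  # → possible overfitting or forgetting
```

In practice you would track these metrics across several epochs rather than a single before/after pair, since one noisy epoch can trigger a false alarm.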