When tuning padding and sequence length in NLP, the key metrics to watch are model accuracy and validation loss: they show whether the model still learns well from the fixed-length, padded sequences. Padding appends filler tokens so that every sequence in a batch has the same length, which lets the model process the batch as a single dense tensor.
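The mechanics above can be sketched in plain Python: pad each sequence in a batch to the batch maximum and build an attention mask that marks real tokens (1) versus padding (0). The pad id of 0 and the example token ids are assumptions for illustration, not from the original text.

```python
PAD_ID = 0  # assumed pad token id for this sketch

def pad_batch(sequences, pad_id=PAD_ID):
    """Pad variable-length token-id sequences to the batch maximum length.

    Returns the padded sequences and a parallel attention mask
    (1 for real tokens, 0 for padding).
    """
    max_len = max(len(seq) for seq in sequences)
    padded, masks = [], []
    for seq in sequences:
        n_pad = max_len - len(seq)
        padded.append(list(seq) + [pad_id] * n_pad)
        masks.append([1] * len(seq) + [0] * n_pad)
    return padded, masks

# Illustrative batch of token-id sequences of different lengths.
batch = [[5, 9, 2], [7, 3], [4, 8, 6, 1, 2]]
padded, masks = pad_batch(batch)
# Every row now has length 5; the mask tells the model which
# positions are real tokens and which are filler.
```

Deep-learning frameworks provide equivalents (for example, padded batching in a collate function), but the idea is the same: the mask lets attention and loss computations ignore the filler positions.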
Too much padding is costly, though: padded positions waste compute and memory, and if they are not masked out they inject noise that can lower accuracy. Monitoring validation loss is a quick check on whether padding is hurting learning. Sequence length also drives training speed and memory use, so the chosen maximum length should balance coverage of long sequences against padding waste.
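One way to quantify that balance is to measure what fraction of a padded batch is filler under different length caps. A sketch, where the sample sequence lengths and the cap of 64 are illustrative assumptions:

```python
def padding_fraction(lengths, max_len):
    """Fraction of the padded batch that is filler, after truncating
    each sequence to max_len and padding the rest up to max_len."""
    kept = [min(length, max_len) for length in lengths]
    total = max_len * len(lengths)
    return (total - sum(kept)) / total

# Hypothetical sequence lengths with one extreme outlier (300 tokens).
lengths = [12, 15, 18, 20, 22, 25, 30, 48, 300]

full = padding_fraction(lengths, max(lengths))  # pad everything to 300
capped = padding_fraction(lengths, 64)          # truncate the outlier to 64
# Capping the length cuts padding waste sharply while keeping
# all but one sequence intact.
```

Comparing such numbers across candidate caps (alongside validation loss) makes the length/padding trade-off concrete instead of a guess.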