L1 and L2 regularization help prevent overfitting by adding a penalty on weight magnitude to the loss: L1 penalizes the sum of absolute weight values, while L2 penalizes the sum of squared values. To check whether regularization is working, we look at validation loss and validation accuracy. If validation loss decreases and accuracy improves or stays stable, regularization is helping the model generalize better to new data.
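As a minimal sketch of the idea, here is an L2 (ridge) penalty added to a mean-squared-error loss for a linear model, trained with plain gradient descent in NumPy. The function names (`ridge_loss`, `fit`) and the hyperparameters are illustrative choices, not a specific library API.

```python
import numpy as np

def ridge_loss(w, X, y, lam):
    """Mean squared error plus an L2 penalty: lam * sum of squared weights."""
    residual = X @ w - y
    return np.mean(residual ** 2) + lam * np.sum(w ** 2)

def fit(X, y, lam, lr=0.1, steps=500):
    """Gradient descent on the regularized loss; lam=0 recovers plain MSE.
    The extra gradient term 2*lam*w shrinks weights toward zero each step."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y) + 2 * lam * w
        w -= lr * grad
    return w
```

With the same training data, the regularized fit ends up with a smaller weight norm than the unregularized one; you would then compare their losses on a held-out validation set to see whether the shrinkage actually improved generalization.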
We also watch training loss and training accuracy. If training loss is low but validation loss is high, the model is overfitting. Regularization aims to reduce this gap.
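The train/validation gap described above can be monitored programmatically. Below is a hypothetical heuristic (the helper names and the `patience` threshold are assumptions for illustration): flag overfitting when the validation-minus-training loss gap has widened for several consecutive epochs while training loss kept falling.

```python
def generalization_gap(train_losses, val_losses):
    """Per-epoch gap between validation and training loss.
    A widening positive gap is the classic overfitting signal."""
    return [v - t for t, v in zip(train_losses, val_losses)]

def is_overfitting(train_losses, val_losses, patience=3):
    """Heuristic check: True if the gap grew for `patience` consecutive
    epochs while training loss still decreased over that window."""
    gaps = generalization_gap(train_losses, val_losses)
    if len(gaps) <= patience:
        return False
    recent = gaps[-(patience + 1):]
    gap_growing = all(b > a for a, b in zip(recent, recent[1:]))
    train_falling = train_losses[-1] < train_losses[-(patience + 1)]
    return gap_growing and train_falling
```

If this check fires, increasing the regularization strength (or adding other regularizers such as dropout or early stopping) is a reasonable next step to close the gap.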