
L1 and L2 regularization in TensorFlow - Model Metrics & Evaluation

Which metric matters for L1 and L2 regularization and WHY

L1 and L2 regularization help prevent overfitting by adding a penalty to large weights in the model. To check if regularization works well, we look at validation loss and validation accuracy. If validation loss decreases and accuracy improves or stays stable, regularization is helping the model generalize better to new data.
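To make the penalty term concrete, the short sketch below computes by hand the amounts that L1 and L2 regularization add to the loss. The weights and the penalty strength lam are made-up illustrative values; tf.keras.regularizers.l1 and l2 compute the same sums.

```python
# Hand computation of the L1 and L2 penalty terms added to the loss.
# The weights and penalty strength are illustrative values.
w = [0.8, -0.5, 0.3]
lam = 0.01

l1_penalty = lam * sum(abs(x) for x in w)   # lam * (0.8 + 0.5 + 0.3) = 0.016
l2_penalty = lam * sum(x * x for x in w)    # lam * (0.64 + 0.25 + 0.09) = 0.0098

print(l1_penalty, l2_penalty)
```

Note that the L1 penalty grows linearly with each weight's magnitude, while the L2 penalty grows quadratically, which is why L2 punishes large weights much harder than small ones.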

We also watch training loss and training accuracy. If training loss is low but validation loss is high, the model is overfitting. Regularization aims to reduce this gap.
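A minimal Keras sketch of where the regularizers attach and how to surface the train/validation gap. The layer sizes, input width, and penalty strengths here are assumptions for illustration, not values from the text.

```python
import tensorflow as tf

# Illustrative sketch: layer sizes and penalty strengths are assumptions.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu",
                          kernel_regularizer=tf.keras.regularizers.l2(1e-4)),
    tf.keras.layers.Dense(1, activation="sigmoid",
                          kernel_regularizer=tf.keras.regularizers.l1(1e-5)),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# During training, pass validation_data=(x_val, y_val) to model.fit and
# compare history.history["loss"] with history.history["val_loss"]:
# a shrinking gap suggests the regularization is reducing overfitting.
print(model.count_params())  # (4*16 + 16) + (16*1 + 1) = 97
```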

Confusion matrix example with regularization effect
Without regularization:
          Predicted
          P     N
Actual P 90    30
       N 20    60

With regularization:
          Predicted
          P     N
Actual P 95    25
       N 15    65

Total samples = 200

Here, regularization reduces false positives (20 → 15) and false negatives (30 → 25), indicating better generalization to unseen data.
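The counts in a confusion matrix can be turned into metrics with a small helper function, shown here applied to the no-regularization matrix from the text (TP = 90, FN = 30, FP = 20, TN = 60):

```python
def metrics(tp, fn, fp, tn):
    """Derive accuracy, precision, and recall from confusion-matrix counts."""
    total = tp + fn + fp + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp)   # of predicted positives, how many are right
    recall = tp / (tp + fn)      # of actual positives, how many are caught
    return accuracy, precision, recall

# No-regularization matrix from the text.
acc, prec, rec = metrics(tp=90, fn=30, fp=20, tn=60)
print(acc, prec, rec)  # 0.75, ~0.818, 0.75
```

Running the same helper on the regularized matrix shows the improvement directly.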

Precision vs Recall tradeoff with L1 and L2 regularization

L1 regularization tends to produce sparse models by pushing some weights to zero, which can simplify the model and improve interpretability. This might slightly reduce recall but improve precision by focusing on important features.

L2 regularization spreads out the penalty, shrinking weights but rarely to zero. This often improves recall by keeping more features but may reduce precision if noisy features remain.

Choosing between L1 and L2 depends on whether you want a sparser, simpler model (L1) or smoothly shrunk weights across all features (L2). Both help balance precision and recall by reducing overfitting.
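One way to see the sparse-versus-smooth distinction is to apply a single regularized update step to a toy weight vector. The weights, penalty strength, and step size are illustrative; the L1 step uses the standard soft-threshold (proximal) update, and the L2 step is plain weight decay.

```python
import numpy as np

w = np.array([0.8, 0.05, -0.3, 0.01])  # toy weights, illustrative values
lam, lr = 0.1, 1.0                     # penalty strength and step size

# L2 (weight decay): every weight shrinks proportionally, none reach zero.
w_l2 = w * (1 - 2 * lr * lam)

# L1 (soft-thresholding): weights below the threshold are set exactly to zero.
w_l1 = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)

print(w_l2)  # all four weights still nonzero
print(w_l1)  # [0.7, 0.0, -0.2, 0.0] -- two weights pruned
```

After one step, L2 leaves every feature in play while L1 has already zeroed out the two smallest weights, which is the sparsity that aids interpretability.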

Good vs Bad metric values when using L1 and L2 regularization

Good: Validation loss close to training loss, stable or improved validation accuracy, and balanced precision and recall. For example, precision = 0.85, recall = 0.80, F1 = 0.82.

Bad: Large gap between training and validation loss (overfitting), very low recall or precision, or validation accuracy dropping after adding regularization. For example, precision = 0.95 but recall = 0.30 means the model misses many true positives.
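The F1 scores quoted above follow from the harmonic-mean formula, checked here against both the "good" and "bad" examples:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.85, 0.80), 2))  # "good" example from the text
print(round(f1(0.95, 0.30), 2))  # "bad" example: high precision, low recall
```

The harmonic mean punishes imbalance: the bad example's F1 of about 0.46 sits far below its precision of 0.95, exposing the missed true positives that accuracy alone would hide.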

Common pitfalls with metrics and regularization
  • Accuracy paradox: High accuracy can be misleading if data is imbalanced. Regularization might not fix this alone.
  • Data leakage: If validation data leaks into training, metrics look better but model won't generalize.
  • Over-regularization: Too strong L1 or L2 can underfit, causing high training and validation loss.
  • Ignoring metric tradeoffs: Focusing only on accuracy without checking precision and recall can hide problems.
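The accuracy paradox from the first pitfall is easy to reproduce on an imbalanced toy dataset. The 1000-sample / 20-fraud split is an illustrative assumption; a model that predicts "not fraud" for everything looks accurate but catches nothing:

```python
# Accuracy paradox on an imbalanced toy set (split is an assumption):
# a model that always predicts "not fraud" scores high accuracy, zero recall.
n_total, n_fraud = 1000, 20
tp, fn = 0, n_fraud            # every fraud case is missed
tn, fp = n_total - n_fraud, 0  # every legitimate case is "correct"

accuracy = (tp + tn) / n_total
recall = tp / (tp + fn)
print(accuracy, recall)  # 0.98, 0.0
```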
Self-check question

Your model has 98% accuracy but 12% recall on fraud detection. Is it good for production? Why or why not?

Answer: No, it is not good. The very low recall means the model misses most fraud cases, which is dangerous. Even with high accuracy, the model fails to catch fraud, so it needs improvement, possibly by adjusting regularization or model design.

Key Result
Validation loss and accuracy show if L1/L2 regularization helps reduce overfitting and improves model generalization.