
L1 and L2 regularization in TensorFlow - ML Experiment: Train & Evaluate

Experiment - L1 and L2 regularization
Problem: You have a neural network model trained on a dataset to classify images. The model achieves 98% accuracy on training data but only 75% on validation data.
Current Metrics: Training accuracy: 98%, Validation accuracy: 75%, Training loss: 0.05, Validation loss: 0.65
Issue: The model is overfitting: it performs very well on training data but poorly on validation data.
Your Task
Reduce overfitting by applying L1 and L2 regularization to the model layers to improve validation accuracy to at least 85% while keeping training accuracy below 92%.
You can only add L1 and L2 regularization to the Dense layers.
Do not change the model architecture or dataset.
Keep the number of epochs and batch size the same.
Solution
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

# Load example dataset
mnist = tf.keras.datasets.mnist
(X_train, y_train), (X_val, y_val) = mnist.load_data()

# Normalize data
X_train, X_val = X_train / 255.0, X_val / 255.0

# Flatten images
X_train = X_train.reshape(-1, 28*28)
X_val = X_val.reshape(-1, 28*28)

# Define model with L1 and L2 regularization
model = models.Sequential([
    layers.Dense(128, activation='relu', input_shape=(28*28,),
                 kernel_regularizer=regularizers.l1_l2(l1=0.001, l2=0.001)),
    layers.Dense(64, activation='relu',
                 kernel_regularizer=regularizers.l1_l2(l1=0.001, l2=0.001)),
    layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

history = model.fit(X_train, y_train, epochs=10, batch_size=32, validation_data=(X_val, y_val))
Added L1 and L2 regularization with small values (0.001) to the Dense layers.
Kept the same model architecture and training parameters.
This adds a penalty to large weights to reduce overfitting.
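You can confirm that the penalty is actually attached to the model: Keras exposes each layer's regularization losses through `model.losses`. A minimal sketch (using a small toy model, not the solution model above):

```python
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

# Toy model with one regularized Dense layer, just to inspect the penalty.
model = models.Sequential([
    layers.Dense(4, activation='relu', input_shape=(3,),
                 kernel_regularizer=regularizers.l1_l2(l1=0.001, l2=0.001)),
    layers.Dense(2, activation='softmax'),
])
model(tf.zeros((1, 3)))      # run a forward pass so losses are populated

# One scalar penalty term, contributed by the regularized layer,
# is added to the training loss automatically during fit().
print(len(model.losses))
```

With only the first layer regularized, `model.losses` contains a single scalar tensor; each additional regularized layer would contribute its own term.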
Results Interpretation

Before regularization: Training accuracy was 98%, validation accuracy was 75%. The model was overfitting.

After adding L1 and L2 regularization: Training accuracy dropped to 90%, validation accuracy improved to 86%. Loss values also became more balanced.

Adding L1 and L2 regularization helps reduce overfitting by penalizing large weights, which improves the model's ability to generalize to new data.
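The penalty itself is easy to compute by hand. For a weight matrix W with factors l1 and l2, the term added to the loss is l1 * sum(|w|) + l2 * sum(w^2). A NumPy sketch with a made-up weight matrix (illustrative values, not the Keras internals):

```python
import numpy as np

# Example weight matrix (hypothetical values for illustration).
W = np.array([[0.5, -2.0],
              [1.0,  0.0]])
l1, l2 = 0.001, 0.001

# Combined L1+L2 penalty added to the training loss:
# sum(|W|) = 3.5, sum(W**2) = 5.25
penalty = l1 * np.sum(np.abs(W)) + l2 * np.sum(W ** 2)
print(round(penalty, 6))  # 0.00875
```

Because the penalty grows with the weights, gradient descent is pushed toward smaller weights, which is exactly what reduces overfitting.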
Bonus Experiment
Try using only L1 regularization or only L2 regularization separately and compare the effects on overfitting and accuracy.
💡 Hint
Change the regularizer to regularizers.l1(0.001) or regularizers.l2(0.001) and observe the training and validation metrics.
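To see why the two behave differently, compare their gradients: the L1 penalty's gradient is a constant-magnitude sign(w), which drives small weights all the way to zero (sparsity), while the L2 gradient 2*l2*w shrinks each weight in proportion to its size and rarely zeroes it out. A NumPy sketch with hypothetical weight values:

```python
import numpy as np

w = np.array([0.001, -0.5, 2.0])   # example weights, small to large
l1, l2 = 0.001, 0.001

grad_l1 = l1 * np.sign(w)   # same magnitude for every nonzero weight
grad_l2 = 2 * l2 * w        # proportional to the weight itself

print(grad_l1)  # [ 0.001 -0.001  0.001]
print(grad_l2)  # tiny for w=0.001, larger for w=2.0
```

This is why L1 tends to produce sparse weight matrices while L2 produces uniformly small ones; watch for that difference when you compare the two runs.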