
L1 and L2 regularization in TensorFlow - Cheat Sheet & Quick Revision

Recall & Review
beginner
What is L1 regularization in machine learning?
L1 regularization adds the sum of the absolute values of the model's weights to the loss. It helps make the model simpler by pushing some weights to zero, which can remove unimportant features.
beginner
What does L2 regularization do to a model's weights?
L2 regularization adds the sum of the squares of the model's weights to the loss. It encourages smaller weights overall but does not force them to zero, helping the model avoid overfitting.
intermediate
How do L1 and L2 regularization help prevent overfitting?
Both add a penalty to the loss based on the size of weights. This penalty discourages the model from fitting noise in the training data, making it generalize better to new data.
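The penalty terms can be written out directly. A tiny NumPy sketch of both (the weights and the strength `lam` are made-up values for illustration):

```python
import numpy as np

# Hypothetical weight vector and regularization strength.
w = np.array([0.5, -1.5, 2.0])
lam = 0.01

# L1 penalty: lambda * sum of absolute weights.
l1_penalty = lam * np.sum(np.abs(w))

# L2 penalty: lambda * sum of squared weights.
l2_penalty = lam * np.sum(w ** 2)

print(l1_penalty)  # 0.04
print(l2_penalty)  # 0.065
```

Either penalty is added to the data loss during training, so large weights cost extra and the optimizer is steered toward simpler fits.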
beginner
Show a simple TensorFlow code snippet to add L2 regularization to a Dense layer.
import tensorflow as tf
from tensorflow.keras import layers, regularizers

model = tf.keras.Sequential([
    layers.Dense(64, activation='relu', kernel_regularizer=regularizers.l2(0.01)),
    layers.Dense(1)
])
intermediate
What is the main difference in the effect of L1 vs L2 regularization on model weights?
L1 regularization can make some weights exactly zero, effectively selecting features. L2 regularization makes weights smaller but usually keeps them all nonzero.
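One way to see why L1 can zero weights while L2 only shrinks them is to compare the penalty gradients. A minimal NumPy sketch (the weight values are arbitrary):

```python
import numpy as np

w = np.array([2.0, 0.1, -0.01])

# Gradient of the L1 penalty |w| is sign(w): a constant-size push
# toward zero, so small weights can be driven exactly to zero.
l1_grad = np.sign(w)

# Gradient of the L2 penalty w^2 is 2w: the push shrinks along with
# the weight, so weights get small but rarely reach exactly zero.
l2_grad = 2 * w

print(l1_grad)
print(l2_grad)
```

The constant pull of L1 is what produces sparse models; the proportional pull of L2 just keeps everything small.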
Which regularization method can lead to sparse models by setting some weights to zero?
A. Dropout
B. L2 regularization
C. L1 regularization
D. Batch normalization

What does L2 regularization add to the loss function?
A. Sum of absolute weights
B. Sum of weight inverses
C. Sum of weights
D. Sum of squared weights

In TensorFlow, which argument adds L2 regularization to a Dense layer?
A. bias_regularizer
B. kernel_regularizer
C. activity_regularizer
D. dropout_rate

Why do we use regularization in machine learning models?
A. To prevent overfitting
B. To reduce dataset size
C. To speed up training
D. To increase model complexity

Which regularization method is more likely to keep all weights small but nonzero?
A. L2 regularization
B. Early stopping
C. L1 regularization
D. Data augmentation
Explain in your own words how L1 and L2 regularization help a model avoid overfitting.
Think about how adding extra cost to big weights changes the model.
Write a short TensorFlow code example showing how to add L1 regularization to a Dense layer.
Use tensorflow.keras.layers and regularizers.l1 with a small value.
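One possible answer sketch, mirroring the earlier L2 snippet but swapping in `regularizers.l1` (the strength 0.01 is an arbitrary small value):

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

# kernel_regularizer applies the L1 penalty (0.01 * sum|w|)
# to the layer's weight matrix during training.
model = tf.keras.Sequential([
    layers.Dense(64, activation='relu',
                 kernel_regularizer=regularizers.l1(0.01)),
    layers.Dense(1)
])
```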