Recall & Review
beginner
What is L1 regularization in machine learning?
L1 regularization adds the sum of the absolute values of the model's weights, scaled by a strength coefficient, to the loss. It simplifies the model by pushing some weights exactly to zero, which can effectively remove unimportant features.
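The penalty itself is easy to compute by hand. A minimal NumPy sketch (the weight values and the strength `lam` are made up for illustration):

```python
import numpy as np

# Hypothetical weight vector and regularization strength, for illustration only.
w = np.array([0.5, -0.25, 0.0, 1.0])
lam = 0.01

# L1 penalty: strength times the sum of absolute weight values.
l1_penalty = lam * np.sum(np.abs(w))  # 0.01 * 1.75 = 0.0175
```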
beginner
What does L2 regularization do to a model's weights?
L2 regularization adds the sum of the squares of the model's weights, scaled by a strength coefficient, to the loss. It encourages smaller weights overall but does not force them to zero, helping the model avoid overfitting.
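The L2 penalty can be sketched the same way, as the strength times the sum of squared weights (the values here are hypothetical):

```python
import numpy as np

# Hypothetical weight vector and regularization strength, for illustration only.
w = np.array([0.5, -0.25, 0.0, 1.0])
lam = 0.01

# L2 penalty: strength times the sum of squared weight values.
l2_penalty = lam * np.sum(w ** 2)  # 0.01 * 1.3125 = 0.013125
```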
intermediate
How do L1 and L2 regularization help prevent overfitting?
Both add a penalty to the loss based on the size of the weights. This penalty discourages the model from fitting noise in the training data, making it generalize better to new data.
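Concretely, the training objective becomes the data loss plus the penalty. A sketch with made-up numbers (the placeholder `data_loss` stands in for whatever unregularized loss the model produces):

```python
import numpy as np

# Illustrative values only: a placeholder data loss and a small weight vector.
data_loss = 0.42
w = np.array([0.5, -0.25, 1.0])
lam = 0.01

# Regularized objectives: the penalty is added to the unregularized loss.
l1_total = data_loss + lam * np.sum(np.abs(w))  # 0.42 + 0.0175
l2_total = data_loss + lam * np.sum(w ** 2)     # 0.42 + 0.013125
```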
beginner
Show a simple TensorFlow code snippet to add L2 regularization to a Dense layer.
import tensorflow as tf
from tensorflow.keras import layers, regularizers

model = tf.keras.Sequential([
    layers.Dense(64, activation='relu',
                 kernel_regularizer=regularizers.l2(0.01)),
    layers.Dense(1)
])
intermediate
What is the main difference in the effect of L1 vs L2 regularization on model weights?
L1 regularization can make some weights exactly zero, effectively selecting features. L2 regularization makes weights smaller but usually keeps them all nonzero.
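One way to see the difference is a single closed-form (proximal) update under each penalty — a sketch, not a full training loop; the weights and the exaggerated strength are made up so the effect is visible:

```python
import numpy as np

w = np.array([0.8, 0.05, -0.3, 0.01])  # hypothetical weights
lam = 0.1                              # exaggerated strength for illustration

# L1 proximal step (soft-thresholding): small weights land exactly on zero.
w_l1 = np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

# L2 proximal step: every weight shrinks toward zero but stays nonzero.
w_l2 = w / (1.0 + 2.0 * lam)
```

Here `w_l1` comes out as `[0.7, 0.0, -0.2, 0.0]` — two weights are exactly zero — while `w_l2` shrinks all four entries but keeps every one nonzero.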
Which regularization method can lead to sparse models by setting some weights to zero?
L1 regularization adds absolute value penalties that can push weights exactly to zero, creating sparsity.
What does L2 regularization add to the loss function?
L2 regularization adds the sum of the squares of the weights to the loss.
In TensorFlow, which argument adds L2 regularization to a Dense layer?
The kernel_regularizer argument applies regularization to the layer's weights.
Why do we use regularization in machine learning models?
Regularization helps prevent overfitting by penalizing large weights.
Which regularization method is more likely to keep all weights small but nonzero?
L2 regularization shrinks weights but usually does not make them exactly zero.
Explain in your own words how L1 and L2 regularization help a model avoid overfitting.
Think about how adding extra cost to big weights changes the model.
Write a short TensorFlow code example showing how to add L1 regularization to a Dense layer.
Use tensorflow.keras.layers and regularizers.l1 with a small value.
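One possible answer — a sketch using the Keras `kernel_regularizer` argument; the layer sizes and the 0.01 strength are arbitrary choices:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

# L1 regularization applied to the Dense layer's weights (kernel).
model = tf.keras.Sequential([
    layers.Dense(64, activation='relu',
                 kernel_regularizer=regularizers.l1(0.01)),
    layers.Dense(1)
])
```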