Which statement correctly describes the main difference between L1 and L2 regularization in machine learning models?
Think about which regularization method helps with feature selection by removing some features completely.
L1 regularization adds an absolute-value penalty on the weights and can drive some weights exactly to zero, producing sparse models. L2 adds a squared penalty, which shrinks weights toward zero but does not make them exactly zero.
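As a quick sketch of the two penalties (the weight values here are arbitrary, chosen only for illustration), Keras regularizer objects can be called directly on a tensor to compute the penalty term they would add to the loss:

```python
import tensorflow as tf

# Illustrative weight vector: one large, one small, one zero entry.
w = tf.constant([0.5, -0.1, 0.0])

# L1 penalty: lambda * sum(|w|); L2 penalty: lambda * sum(w^2).
l1 = tf.keras.regularizers.l1(0.01)
l2 = tf.keras.regularizers.l2(0.01)

print(float(l1(w)))  # ≈ 0.01 * (0.5 + 0.1 + 0.0) = 0.006
print(float(l2(w)))  # ≈ 0.01 * (0.25 + 0.01 + 0.0) = 0.0026
```

Because the L1 penalty's gradient has constant magnitude near zero, it keeps pushing small weights all the way to zero, whereas the L2 gradient vanishes as a weight approaches zero.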
What will be the shape of the weights after creating a Dense layer with 5 units and L2 regularization in TensorFlow?
import tensorflow as tf

layer = tf.keras.layers.Dense(
    5,
    kernel_regularizer=tf.keras.regularizers.l2(0.01),
    input_shape=(10,),
)
model = tf.keras.Sequential([layer])
model.build()
weights_shape = model.layers[0].kernel.shape
print(weights_shape)  # (10, 5)
Remember the Dense layer shape is (input_features, output_units).
The Dense layer's kernel weights shape is (input_dim, units). Here input_dim=10 and units=5, so shape is (10, 5).
You train a neural network and notice it overfits the training data. Which change to the regularization parameter lambda (strength) will most likely reduce overfitting?
Think about how regularization controls model complexity and overfitting.
Increasing lambda increases the penalty on large weights, encouraging simpler models and reducing overfitting.
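The effect of lambda is easiest to see in ridge regression, where the L2-regularized solution has a closed form. This is an illustrative sketch on synthetic data (the data, seed, and lambda values are arbitrary): as lambda grows, the fitted weight vector shrinks.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=50)

# Ridge (L2) closed form: w = (X^T X + lambda * I)^{-1} X^T y
for lam in [0.0, 1.0, 100.0]:
    w = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
    print(lam, np.linalg.norm(w))  # norm of w decreases as lambda grows
```

A larger lambda trades a worse fit to the training data for smaller weights, which is exactly the lever that reduces overfitting.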
After training a model with L1 regularization, which metric would best show that many weights are exactly zero?
Sparsity means many weights are zero. Which metric directly measures that?
Counting how many weights are exactly zero directly measures sparsity, which L1 regularization encourages.
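A minimal sketch of that sparsity metric, using a hypothetical weight matrix standing in for the kernel of an L1-trained layer:

```python
import numpy as np

# Hypothetical weight matrix after L1-regularized training
# (values chosen for illustration only).
weights = np.array([[0.0, 0.3, 0.0],
                    [-0.7, 0.0, 0.0]])

num_zero = int(np.sum(weights == 0.0))
sparsity = num_zero / weights.size
print(num_zero, sparsity)  # 4 of 6 weights are zero -> sparsity ≈ 0.667
```

In practice, weights trained with gradient descent may be only near zero rather than exactly zero, so a small threshold (e.g. `np.abs(weights) < 1e-6`) is often used instead of an exact equality check.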
Consider this TensorFlow code snippet that applies both L1 and L2 regularization to a Dense layer. What does it print?
import tensorflow as tf

layer = tf.keras.layers.Dense(
    4,
    kernel_regularizer=tf.keras.regularizers.l1_l2(l1=0.01, l2=0.01),
)
model = tf.keras.Sequential([layer])
model.build(input_shape=(None, 8))
print(model.layers[0].kernel_regularizer.l1)
Check the attributes available on the l1_l2 regularizer object.
The l1_l2 factory returns an L1L2 regularizer instance, which does expose l1 and l2 attributes. The code runs without error and prints the stored L1 coefficient, 0.01.