Choose the best explanation for why Dropout layers are used during training.
Think about how Dropout affects the network's learning to avoid memorizing the training data.
Dropout randomly turns off neurons during training, which helps the model generalize better by not relying too much on any single neuron.
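To make this concrete, here is a minimal sketch of the "inverted dropout" mechanism that Keras uses, written in plain NumPy so the arithmetic is visible (the function name and shapes are illustrative, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)

def inverted_dropout(x, rate, rng):
    # Keep each unit with probability (1 - rate); scale the survivors
    # by 1 / (1 - rate) so the expected activation is unchanged.
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

x = np.ones((4, 5))
y = inverted_dropout(x, 0.5, rng)
# Surviving entries of an all-ones input become 2.0; the rest are 0.0.
```

Because different neurons are zeroed on every forward pass, no single neuron can be relied on, which is the regularizing effect described above.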
Given the following TensorFlow code, what is the shape of output?
import tensorflow as tf

input_tensor = tf.random.uniform((32, 10))
dropout_layer = tf.keras.layers.Dropout(0.5)
output = dropout_layer(input_tensor, training=True)
output_shape = output.shape
print(output_shape)
Dropout does not change the shape of the input tensor.
Dropout randomly sets some values to zero but keeps the shape the same as the input.
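You can verify the shape claim directly; this sketch reuses the same sizes as the snippet above:

```python
import tensorflow as tf

x = tf.random.uniform((32, 10))
layer = tf.keras.layers.Dropout(0.5)
y = layer(x, training=True)

# Dropout zeroes elements but never reshapes: output shape matches input.
print(y.shape)  # (32, 10)
```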
Which option shows the best practice for placing Dropout layers in a feedforward neural network?
Dropout is usually applied between layers to reduce overfitting.
Dropout is commonly placed after hidden Dense layers but not after the final output layer, so the model's predictions are not perturbed.
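As a sketch of that placement (the layer widths and the 0.3 rate here are illustrative choices, not fixed rules):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.3),   # after a hidden Dense layer
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dropout(0.3),   # after another hidden layer
    tf.keras.layers.Dense(10),      # output layer: no Dropout after it
])

out = model(tf.random.uniform((2, 20)), training=True)
```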
What is the most likely effect of increasing the dropout rate from 0.2 to 0.8 during training?
Think about what happens if too many neurons are turned off during training.
High dropout rates can cause the model to underfit because it loses too much information during training.
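A quick experiment makes the contrast visible: at rate 0.8 roughly four fifths of the activations are zeroed on each pass, versus about one fifth at rate 0.2 (the tensor size here is arbitrary, chosen large enough for the fractions to be stable):

```python
import tensorflow as tf

tf.random.set_seed(0)
x = tf.ones((1000, 100))

low = tf.keras.layers.Dropout(0.2)(x, training=True)
high = tf.keras.layers.Dropout(0.8)(x, training=True)

def frac_zero(t):
    # Fraction of elements that were dropped (set to exactly 0).
    return float(tf.reduce_mean(tf.cast(tf.equal(t, 0.0), tf.float32)))

print(frac_zero(low))   # close to 0.2
print(frac_zero(high))  # close to 0.8
```

With ~80% of the signal discarded on every pass, the surviving units carry too little information for the network to fit the data well, hence underfitting.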
Consider this TensorFlow code snippet:
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10)
])

# Use the same input for both calls so any difference comes from
# Dropout's training behavior, not from different random inputs.
x = tf.random.uniform((1, 20))
output_train = model(x, training=True)
output_infer = model(x, training=False)
print(output_train == output_infer)
Why might output_train and output_infer differ?
Recall when Dropout is applied during model use.
Dropout randomly disables neurons only during training (scaling the surviving activations by 1/(1 - rate) to keep their expected value unchanged). During inference it passes its input through unchanged, so the two outputs generally differ.
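The two regimes can be isolated on a bare Dropout layer; this sketch uses an all-ones input so the training-time scaling is easy to spot:

```python
import tensorflow as tf

layer = tf.keras.layers.Dropout(0.5)
x = tf.ones((1, 8))

# training=True: each unit is either dropped (0.0) or scaled to 2.0.
train_out = layer(x, training=True)

# training=False: the layer is an identity; the input passes unchanged.
infer_out = layer(x, training=False)
```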