
Dropout layers in TensorFlow - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual (intermediate)
What is the main purpose of a Dropout layer in a neural network?

Choose the best explanation for why Dropout layers are used during training.

A. To randomly deactivate some neurons during training to prevent overfitting.
B. To increase the size of the training dataset by duplicating samples.
C. To speed up training by skipping some layers entirely.
D. To normalize the input data before feeding it to the network.
💡 Hint

Think about how Dropout affects the network's learning to avoid memorizing the training data.
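If you want to see this behavior directly, here is a minimal sketch (not part of the challenge) that applies a Dropout layer to a tensor of ones. During training, each unit is kept or dropped independently, and survivors are rescaled by 1 / (1 - rate) (inverted dropout); outside training the layer passes inputs through unchanged.

```python
import tensorflow as tf

# A batch of ones makes the effect easy to see: dropped units become 0.0,
# surviving units are rescaled by 1 / (1 - 0.5) = 2.0 (inverted dropout).
x = tf.ones((1, 8))
layer = tf.keras.layers.Dropout(rate=0.5)

train_out = layer(x, training=True)   # mix of 0.0 and 2.0 entries
infer_out = layer(x, training=False)  # identity: all entries stay 1.0

print(train_out.numpy())
print(infer_out.numpy())
```

Run it a few times: the pattern of zeros changes on every training-mode call, which is exactly what stops the network from relying on any single neuron.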

Predict Output (intermediate)
Output shape after applying Dropout layer

Given the following TensorFlow code, what is the shape of output?

TensorFlow
import tensorflow as tf
input_tensor = tf.random.uniform((32, 10))
dropout_layer = tf.keras.layers.Dropout(0.5)
output = dropout_layer(input_tensor, training=True)
output_shape = output.shape
print(output_shape)
A. (32, 5)
B. (16, 10)
C. (32, 10)
D. (10, 32)
💡 Hint

Dropout does not change the shape of the input tensor.
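A quick sketch to confirm the hint: Dropout zeroes individual entries but never removes rows or columns, so the output tensor has exactly the same shape as the input.

```python
import tensorflow as tf

x = tf.random.uniform((32, 10))
out = tf.keras.layers.Dropout(0.5)(x, training=True)

# Roughly half the entries are zeroed, but no dimension changes.
print(out.shape)  # (32, 10)
```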

Model Choice (advanced)
Choosing where to place Dropout layers in a neural network

Which option shows the best practice for placing Dropout layers in a feedforward neural network?

A. Only before the input layer.
B. After each Dense layer except the output layer.
C. After the output layer.
D. Only after the first Dense layer.
💡 Hint

Dropout is usually applied between layers to reduce overfitting.
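As one illustration of that placement advice, here is a hypothetical stack (sizes and rates are arbitrary choices, not prescribed by the challenge): Dropout follows each hidden Dense layer, but nothing follows the output layer, so final predictions are never randomly zeroed.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dropout(0.3),   # regularize the first hidden layer
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dropout(0.3),   # regularize the second hidden layer
    tf.keras.layers.Dense(10),      # output layer: no Dropout after it
])
model.summary()
```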

Hyperparameter (advanced)
Effect of increasing dropout rate on model training

What is the most likely effect of increasing the dropout rate from 0.2 to 0.8 during training?

A. The model will overfit more due to less regularization.
B. The model will train faster with better accuracy.
C. The model's input data will be normalized automatically.
D. The model may underfit because too many neurons are dropped.
💡 Hint

Think about what happens if too many neurons are turned off during training.
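You can make the difference between the two rates concrete with a small experiment (a sketch, not part of the challenge): measure what fraction of activations is zeroed at each rate. At 0.8, roughly four out of five units are silenced on every forward pass, leaving little capacity to learn.

```python
import tensorflow as tf

x = tf.ones((1, 10000))  # large tensor so the dropped fraction is stable

for rate in (0.2, 0.8):
    out = tf.keras.layers.Dropout(rate)(x, training=True)
    zero_frac = float(tf.reduce_mean(tf.cast(out == 0.0, tf.float32)))
    print(f"rate={rate}: ~{zero_frac:.2f} of activations zeroed")
```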

🔧 Debug (expert)
Why does this Dropout layer not work as expected during inference?

Consider this TensorFlow code snippet:

import tensorflow as tf
model = tf.keras.Sequential([
  tf.keras.layers.Dense(64, activation='relu'),
  tf.keras.layers.Dropout(0.5),
  tf.keras.layers.Dense(10)
])

x = tf.random.uniform((1, 20))           # same input for both calls
output_train = model(x, training=True)   # Dropout active: random mask applied
output_infer = model(x, training=False)  # Dropout inactive: deterministic pass
print(output_train == output_infer)

Why might output_train and output_infer differ?

A. Dropout is only active during training, so outputs differ when training=False.
B. Dropout randomly changes weights permanently, causing different outputs.
C. The model is missing an activation function after the Dropout layer.
D. Dropout layers cause outputs to be identical regardless of training mode.
💡 Hint

Recall when Dropout is applied during model use.
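To pin this down yourself, here is a minimal sketch isolating the Dropout layer: with training=False it is the identity and repeated calls agree exactly, while with training=True a fresh random mask is drawn on every call.

```python
import tensorflow as tf

layer = tf.keras.layers.Dropout(0.5)
x = tf.ones((1, 6))

# Inference mode: Dropout is a no-op, so repeated calls match exactly.
a = layer(x, training=False)
b = layer(x, training=False)
print(bool(tf.reduce_all(a == b)))  # True

# Training mode: a new random mask per call, so outputs generally differ.
c = layer(x, training=True)
print(c.numpy())
```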