Challenge - 5 Problems
Dropout Mastery
Get all challenges correct to earn this badge!
Test your skills under time pressure!
❓ Predict Output
intermediate · 2:00 remaining
Output of Dropout layer during training
What is the output of the following PyTorch code snippet when the model is in training mode?
PyTorch
import torch
import torch.nn as nn

torch.manual_seed(0)
dropout = nn.Dropout(p=0.5)
input_tensor = torch.tensor([1.0, 2.0, 3.0, 4.0])
output = dropout(input_tensor)
print(output)
Attempts: 2 left
💡 Hint
Remember that dropout randomly zeroes some elements and scales the others by 1/(1-p) during training.
✗ Incorrect
During training, nn.Dropout with p=0.5 randomly sets roughly half of the input elements to zero and scales the surviving elements by 1/(1-0.5) = 2, so the expected magnitude of the activations is preserved. Given the fixed seed, the output tensor is [2., 0., 6., 8.].
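The scale-by-1/(1-p) behavior can be checked directly. The sketch below does not assume any particular dropout mask (which positions are zeroed depends on the RNG state); it only verifies that every output element is either 0 or exactly twice its input:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
dropout = nn.Dropout(p=0.5)
dropout.train()  # dropout is only active in training mode

x = torch.tensor([1.0, 2.0, 3.0, 4.0])
out = dropout(x)

# Each element is either dropped (0) or scaled by 1/(1-p) = 2.
for xi, oi in zip(x, out):
    assert oi.item() in (0.0, 2.0 * xi.item())
```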
🧠 Conceptual
intermediate · 1:30 remaining
Effect of Dropout during evaluation
What happens to the output of nn.Dropout when the model is switched to evaluation mode (model.eval())?
Attempts: 2 left
💡 Hint
Think about why dropout is only used during training.
✗ Incorrect
During evaluation, dropout is disabled and acts as an identity function, passing inputs unchanged to ensure consistent predictions.
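A minimal sketch of this identity behavior: after calling .eval(), the dropout layer returns its input unchanged, with no zeroing and no rescaling.

```python
import torch
import torch.nn as nn

dropout = nn.Dropout(p=0.5)
dropout.eval()  # evaluation mode: dropout is disabled

x = torch.tensor([1.0, 2.0, 3.0, 4.0])
# In eval mode the layer is an identity function.
assert torch.equal(dropout(x), x)
```

In a full model, calling model.eval() sets this flag on every submodule at once, which is why predictions become deterministic at inference time.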
❓ Hyperparameter
advanced · 1:30 remaining
Choosing dropout probability
Which dropout probability (p) value is generally recommended to prevent overfitting without causing underfitting in a neural network?
Attempts: 2 left
💡 Hint
Typical dropout values are between 0.2 and 0.5.
✗ Incorrect
A dropout probability around 0.5 is commonly used to balance regularization and model capacity, reducing overfitting without hurting learning too much.
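The meaning of p can be verified empirically: over a large input, the fraction of elements zeroed during training is close to p. This is a sketch, not a tuning recipe; the seed and tensor size are arbitrary choices.

```python
import torch
import torch.nn as nn

p = 0.5
dropout = nn.Dropout(p=p)
dropout.train()

torch.manual_seed(0)
x = torch.ones(100_000)
out = dropout(x)

# Surviving elements become 2.0, so zeros mark exactly the dropped ones.
frac_zeroed = (out == 0).float().mean().item()
assert abs(frac_zeroed - p) < 0.01
```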
🔧 Debug
advanced · 2:00 remaining
Identifying error in dropout usage
What error will occur when running this code snippet?
PyTorch
import torch
import torch.nn as nn

dropout = nn.Dropout(p=1.2)
input_tensor = torch.tensor([1.0, 2.0, 3.0])
output = dropout(input_tensor)
print(output)
Attempts: 2 left
💡 Hint
Check the valid range for dropout probability p.
✗ Incorrect
Dropout probability p must be in the range [0, 1] inclusive. Setting p=1.2 raises a ValueError as soon as the module is constructed, before any forward pass runs.
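A quick sketch confirming that the error occurs at construction time and is a ValueError:

```python
import torch.nn as nn

try:
    nn.Dropout(p=1.2)  # invalid: p must lie in [0, 1]
except ValueError as e:
    # The constructor validates p and rejects out-of-range values.
    print("ValueError:", e)
```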
❓ Model Choice
expert · 2:30 remaining
Best place to apply dropout in a neural network
Where is the best place to apply nn.Dropout in a feedforward neural network to improve generalization?
Attempts: 2 left
💡 Hint
Dropout is usually applied to internal layers to prevent co-adaptation of neurons.
✗ Incorrect
Applying dropout between hidden layers after activations helps regularize the network by randomly dropping neurons during training.
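This placement can be sketched with a small, hypothetical feedforward network (layer widths and p=0.5 are illustrative choices, not prescriptions): dropout follows each hidden activation, and the output layer is left untouched.

```python
import torch
import torch.nn as nn

# Hypothetical feedforward net: dropout sits after each hidden activation.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # regularizes the first hidden layer
    nn.Linear(32, 32),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # regularizes the second hidden layer
    nn.Linear(32, 4),    # no dropout on the output layer
)

model.train()
x = torch.randn(8, 16)
print(model(x).shape)  # batch of 8 inputs -> 8 outputs of size 4
```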