Challenge - 5 Problems
Early Stopping Mastery
Get all challenges correct to earn this badge!
Test your skills under time pressure!
❓ Predict Output
intermediate · 2:00 remaining
Output of Early Stopping Check Function
Given the following early stopping check function, what will be the output after calling it with the provided inputs?
PyTorch
def early_stopping_check(val_loss, best_loss, patience_counter, patience):
    if val_loss < best_loss:
        best_loss = val_loss
        patience_counter = 0
        return True, best_loss, patience_counter
    else:
        patience_counter += 1
        if patience_counter >= patience:
            return False, best_loss, patience_counter
        else:
            return True, best_loss, patience_counter

best_loss = 0.5
patience_counter = 2
patience = 3
val_loss = 0.6
result = early_stopping_check(val_loss, best_loss, patience_counter, patience)
print(result)
Attempts: 2 left
💡 Hint
Think about what happens when validation loss does not improve and patience counter reaches the limit.
✗ Incorrect
The validation loss (0.6) is not less than the best loss (0.5), so patience_counter increments from 2 to 3. The counter now equals patience (3), so the function returns (False, 0.5, 3), signalling that training should stop.
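To see how the returned state is meant to be threaded through a training loop, here is a small driver around a condensed, behaviour-equivalent version of the question's function (the epoch loss values are made up for illustration):

```python
def early_stopping_check(val_loss, best_loss, patience_counter, patience):
    if val_loss < best_loss:          # improvement: record new best, reset patience
        return True, val_loss, 0
    patience_counter += 1             # no improvement: use up one unit of patience
    return patience_counter < patience, best_loss, patience_counter

best_loss, patience_counter, patience = float('inf'), 0, 3
for epoch, val_loss in enumerate([0.7, 0.5, 0.6, 0.6, 0.6], start=1):
    keep_going, best_loss, patience_counter = early_stopping_check(
        val_loss, best_loss, patience_counter, patience)
    if not keep_going:
        print(f"stopping at epoch {epoch}")   # epoch 5: third consecutive bad epoch
        break
```

With the question's inputs (val_loss=0.6, best_loss=0.5, patience_counter=2, patience=3) the function returns (False, 0.5, 3).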
❓ Model Choice
intermediate · 1:30 remaining
Choosing Early Stopping Patience Value
You are training a neural network and want to use early stopping. Which patience value is most suitable to avoid stopping too early but still prevent overfitting?
Attempts: 2 left
💡 Hint
Consider a balance between giving the model time to improve and avoiding wasting time on no improvement.
✗ Incorrect
A patience of 5 allows the model some room to improve after a bad epoch but stops training before wasting too much time.
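There is no universally correct patience, but a quick sweep over a validation-loss curve makes the trade-off concrete. The loss values below are hypothetical, chosen only to show a curve that improves, plateaus, then drifts upward:

```python
# Hypothetical validation-loss curve (made up for illustration).
val_losses = [0.90, 0.70, 0.60, 0.55, 0.56, 0.55, 0.57, 0.58, 0.60, 0.62]

def stop_epoch(losses, patience):
    """Return the 1-based epoch where early stopping triggers, or None."""
    best, counter = float('inf'), 0
    for epoch, loss in enumerate(losses, start=1):
        if loss < best:            # strict improvement resets patience
            best, counter = loss, 0
        else:
            counter += 1
            if counter >= patience:
                return epoch
    return None

for p in (1, 3, 5):
    print(p, stop_epoch(val_losses, p))
```

On this curve, patience 1 stops at epoch 5 (right after the first bad epoch, risking a premature stop on noise), patience 3 stops at epoch 7, and patience 5 waits until epoch 9 before giving up.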
🔧 Debug
advanced · 2:30 remaining
Debugging Early Stopping Implementation
The following early stopping code does not stop training even when validation loss stops improving. What is the bug?
PyTorch
class EarlyStopping:
    def __init__(self, patience=3):
        self.patience = patience
        self.counter = 0
        self.best_loss = float('inf')
        self.early_stop = False

    def __call__(self, val_loss):
        if val_loss > self.best_loss:
            self.counter += 1
            if self.counter >= self.patience:
                self.early_stop = True
        else:
            self.best_loss = val_loss
            self.counter = 0

# Usage example
es = EarlyStopping(patience=2)
losses = [0.5, 0.4, 0.4, 0.4, 0.4]
for loss in losses:
    es(loss)
print(es.early_stop)
Attempts: 2 left
💡 Hint
Think about what happens when val_loss equals best_loss.
✗ Incorrect
With val_loss > self.best_loss, an epoch whose loss only ties the best falls into the else branch, which resets the counter as if the loss had improved. On a plateau the counter therefore never reaches patience and early stopping never triggers. Changing > to >= counts ties as non-improving epochs and fixes this.
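A corrected version of the class, with the comparison flipped to >=, does stop on the same plateauing loss sequence:

```python
class EarlyStopping:
    """Early stopping that treats a tie (no strict improvement) as a bad epoch."""
    def __init__(self, patience=3):
        self.patience = patience
        self.counter = 0
        self.best_loss = float('inf')
        self.early_stop = False

    def __call__(self, val_loss):
        if val_loss >= self.best_loss:   # >= : an equal loss is NOT an improvement
            self.counter += 1
            if self.counter >= self.patience:
                self.early_stop = True
        else:
            self.best_loss = val_loss
            self.counter = 0

es = EarlyStopping(patience=2)
for loss in [0.5, 0.4, 0.4, 0.4, 0.4]:   # loss plateaus at 0.4
    es(loss)
print(es.early_stop)  # True: the plateau counts as consecutive bad epochs
```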
❓ Metrics
advanced · 1:30 remaining
Interpreting Early Stopping Training Logs
During training with early stopping, the validation loss values per epoch are: [0.6, 0.55, 0.54, 0.54, 0.55, 0.56, 0.57]. If patience is set to 2, at which epoch will training stop?
Attempts: 2 left
💡 Hint
Count how many consecutive epochs the validation loss does not improve.
✗ Incorrect
Validation loss last improves at epoch 3 (0.54). The tie at epoch 4 counts as no improvement (counter = 1), and the rise to 0.55 at epoch 5 is the second consecutive bad epoch (counter = 2), so with patience 2 training stops at epoch 5.
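The count can be verified by simulating the log with the same strict-improvement convention used in the first question (val_loss < best_loss, so a tie counts as a bad epoch):

```python
val_losses = [0.6, 0.55, 0.54, 0.54, 0.55, 0.56, 0.57]
best, counter, patience, stop_at = float('inf'), 0, 2, None

for epoch, loss in enumerate(val_losses, start=1):
    if loss < best:              # strict improvement resets the counter
        best, counter = loss, 0
    else:                        # a tie or an increase is a "bad" epoch
        counter += 1
    if counter >= patience:
        stop_at = epoch
        break

print(stop_at)  # 5: epoch 4 ties at 0.54 (counter 1), epoch 5 rises (counter 2)
```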
🧠 Conceptual
expert · 1:00 remaining
Why Use Early Stopping in Model Training?
Which of the following best explains the main reason to use early stopping during training of machine learning models?
Attempts: 2 left
💡 Hint
Think about the difference between training loss and validation loss.
✗ Incorrect
Early stopping monitors validation loss and halts training once it stops improving, preventing the model from overfitting the training data even though training loss would keep decreasing.
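Putting the whole idea together, a minimal sketch of a training loop with early stopping looks like this. The train_one_epoch and evaluate helpers are hypothetical stand-ins for real framework code:

```python
# Minimal sketch of a training loop with early stopping.
# train_one_epoch and evaluate are hypothetical callables supplied by the caller.
def fit(train_one_epoch, evaluate, max_epochs=100, patience=3):
    best_val, counter = float('inf'), 0
    for epoch in range(max_epochs):
        train_one_epoch()                     # training loss keeps falling...
        val_loss = evaluate()                 # ...but validation loss is what we watch
        if val_loss < best_val:
            best_val, counter = val_loss, 0   # improvement: record best, reset patience
        else:
            counter += 1                      # stalling val loss signals overfitting
            if counter >= patience:
                print(f"stopping early at epoch {epoch}")
                break
    return best_val
```

In practice the improvement branch is also where a checkpoint of the best weights would be saved, so the final model comes from the epoch with the lowest validation loss rather than the last epoch trained.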