TensorFlow · ~20 mins

Callbacks (EarlyStopping, ModelCheckpoint) in TensorFlow - Practice Problems & Coding Challenges

Challenge - 5 Problems
Predict Output (intermediate)
What is the output of this EarlyStopping callback configuration?
Consider the following code snippet for training a TensorFlow model with EarlyStopping callback:

early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)
model.fit(x_train, y_train, epochs=20, validation_data=(x_val, y_val), callbacks=[early_stop])

What will happen if the validation loss does not improve for 4 consecutive epochs?
A) Training stops after 3 epochs without improvement, and model weights are restored to the best epoch.
B) Training continues for all 20 epochs regardless of validation loss.
C) Training stops after 4 epochs without improvement, and model weights are restored to the best epoch.
D) Training stops immediately after the first epoch without improvement, and weights are not restored.
💡 Hint
Patience defines how many epochs to wait after last improvement before stopping.
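A minimal runnable sketch of the configuration in question, showing how `patience` and `restore_best_weights` are set on the callback (no training data is involved here; the settings mirror the snippet above):

```python
import tensorflow as tf

# patience=3 means: after the best val_loss seen so far, wait up to 3 more
# epochs for an improvement; if none arrives, stop training. So training
# stops after 3 epochs without improvement, not 4.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=3,
    restore_best_weights=True,  # roll weights back to the best epoch on stop
)
```

Because `restore_best_weights=True`, the model you end up with is the one from the best validation epoch, not the last epoch trained.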
Model Choice (intermediate)
Which ModelCheckpoint option saves the model only when validation accuracy improves?
You want to save your TensorFlow model only when the validation accuracy improves during training. Which ModelCheckpoint callback configuration achieves this?
A) tf.keras.callbacks.ModelCheckpoint('model.h5', monitor='val_accuracy', save_best_only=False, mode='max')
B) tf.keras.callbacks.ModelCheckpoint('model.h5', monitor='val_accuracy', save_best_only=True, mode='max')
C) tf.keras.callbacks.ModelCheckpoint('model.h5', monitor='accuracy', save_best_only=False)
D) tf.keras.callbacks.ModelCheckpoint('model.h5', monitor='val_loss', save_best_only=True, mode='min')
💡 Hint
You want to monitor validation accuracy and save only the best model.
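A short sketch of the combination the hint points at (the filename follows the options above and is illustrative):

```python
import tensorflow as tf

# save_best_only=True writes a checkpoint only when the monitored metric
# beats its previous best; mode='max' tells Keras that a *higher*
# val_accuracy counts as an improvement.
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "model.h5",
    monitor="val_accuracy",
    save_best_only=True,
    mode="max",
)
```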
Hyperparameter (advanced)
What is the effect of setting 'restore_best_weights=False' in EarlyStopping?
In the EarlyStopping callback, what happens if you set restore_best_weights=False and training stops due to no improvement?
A) The model weights remain as they were at the last epoch before stopping, not necessarily the best.
B) The model weights are reset to the initial random weights before training.
C) The model weights are restored to the best epoch's weights automatically.
D) Training restarts from the best epoch automatically.
💡 Hint
Consider what happens to weights if you do not restore best weights.
🔧 Debug (advanced)
Why does this ModelCheckpoint callback not save any model files?
Given this callback:

checkpoint = tf.keras.callbacks.ModelCheckpoint('best_model.h5', monitor='val_accuracy', save_best_only=True, mode='min')

and training where validation accuracy improves, why might no model files be saved?
A) Because monitor='val_accuracy' is not a valid metric name.
B) Because save_best_only=True disables saving any model files.
C) Because mode='min' expects validation accuracy to decrease, so no improvement is detected.
D) Because the file path 'best_model.h5' is invalid and causes silent failure.
💡 Hint
Think about whether validation accuracy should increase or decrease for improvement.
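A corrected version of the buggy callback above, as a sketch (the filename is the one from the question):

```python
import tensorflow as tf

# The bug: mode='min' treats a *drop* in val_accuracy as an improvement,
# so a rising accuracy never beats the running best and no file is saved.
# The fix is mode='max'; omitting mode also works, since mode='auto'
# infers 'max' for accuracy-like metric names.
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_model.h5",
    monitor="val_accuracy",
    save_best_only=True,
    mode="max",
)
```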
🧠 Conceptual (expert)
Why combine EarlyStopping with ModelCheckpoint in training?
What is the main advantage of using both EarlyStopping and ModelCheckpoint callbacks together during model training?
A) Both callbacks perform the same function, so using both doubles the saving frequency.
B) EarlyStopping saves the model after every epoch, and ModelCheckpoint stops training when performance degrades.
C) EarlyStopping increases training epochs, and ModelCheckpoint deletes old models automatically.
D) EarlyStopping stops training early to avoid overfitting, while ModelCheckpoint saves the best model weights during training.
💡 Hint
Think about how each callback helps training and model saving.
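A minimal end-to-end sketch of the two callbacks working together, on tiny random data (the model, data, epoch counts, and checkpoint path are all illustrative, not from the quiz):

```python
import os
import tempfile

import numpy as np
import tensorflow as tf

# Toy binary-classification data: 64 samples, 8 features.
x = np.random.rand(64, 8).astype("float32")
y = np.random.randint(0, 2, size=(64,)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

ckpt_path = os.path.join(tempfile.mkdtemp(), "best.weights.h5")
callbacks = [
    # Stop once val_loss has gone 2 consecutive epochs without improving,
    # and roll the weights back to the best epoch.
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=2,
                                     restore_best_weights=True),
    # Keep on disk only the weights from the best val_accuracy epoch.
    tf.keras.callbacks.ModelCheckpoint(ckpt_path, monitor="val_accuracy",
                                       save_best_only=True,
                                       save_weights_only=True, mode="max"),
]

history = model.fit(x, y, epochs=5, validation_split=0.25,
                    callbacks=callbacks, verbose=0)
epochs_run = len(history.history["loss"])  # at most 5; fewer if stopped early
```

The division of labor is the point of answer D: EarlyStopping decides *when to stop*, ModelCheckpoint decides *what to keep*, so even if training runs past the best epoch, the best weights survive both in memory and on disk.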