Complete the code to retrain a model using new data.
model.fit(new_data, new_labels, epochs=10)
Setting epochs to 10 trains the model for 10 passes over the new data, which is a common choice for retraining.
Complete the code to freeze all layers except the last one before retraining.
for layer in model.layers[:-1]:
    layer.trainable = False
Setting trainable to False freezes those layers so their weights are not updated during retraining.
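The [:-1] slicing pattern can be checked without TensorFlow. Below is a minimal sketch using a stand-in Layer class (a made-up placeholder, not the Keras API) to show that every layer except the last gets frozen:

```python
class Layer:
    """Stand-in for a Keras layer: only tracks a name and a trainable flag."""
    def __init__(self, name):
        self.name = name
        self.trainable = True

# A hypothetical 4-layer model represented as a plain list of layers.
layers = [Layer("dense_1"), Layer("dense_2"), Layer("dense_3"), Layer("output")]

# Freeze every layer except the last one, as in the exercise.
for layer in layers[:-1]:
    layer.trainable = False

trainable_names = [l.name for l in layers if l.trainable]
print(trainable_names)  # only the final layer remains trainable
```

The slice layers[:-1] covers all but the last element, so only "output" keeps trainable set to True.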
Fix the error in the code to compile the model with a suitable optimizer for retraining.
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
The Adam optimizer is widely used for retraining because it adapts per-parameter learning rates and converges reliably.
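Adam's adaptive behavior can be illustrated with a pure-Python update for a single scalar parameter. This is a sketch of the textbook update rule under default hyperparameters, not TensorFlow's implementation:

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter theta at step t (1-based)."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment (variance) estimate
    m_hat = m / (1 - beta1 ** t)              # bias-corrected estimates
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(theta) = theta**2 (gradient 2*theta) starting from theta = 1.0.
theta, m, v = 1.0, 0.0, 0.0
for t in range(1, 501):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
print(theta)  # moves toward the minimum at 0
```

Because the update divides by the running gradient magnitude, the effective step stays near the learning rate regardless of the raw gradient scale, which is what "adapts learning rates" refers to.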
Fill both blanks to create a dictionary comprehension that filters new data samples with label 1 for retraining.
filtered_data = {i: x for i, x in enumerate(new_data) if new_labels[i] == 1}
The comprehension keeps only the samples whose label equals 1, which is useful for retraining on a specific class.
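With small example lists (the sample values below are made up for illustration), the filled-in comprehension behaves like this:

```python
new_data = ["a", "b", "c", "d"]
new_labels = [0, 1, 1, 0]

# Keep index -> sample pairs whose label is 1.
filtered_data = {i: x for i, x in enumerate(new_data) if new_labels[i] == 1}
print(filtered_data)  # {1: 'b', 2: 'c'}
```

Keeping the original index as the key preserves the alignment between each retained sample and its position in new_labels.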
Fill all three blanks to update the learning rate and retrain the model with early stopping.
from tensorflow.keras.callbacks import EarlyStopping

optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
early_stop = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)
model.fit(train_data, train_labels, epochs=20, validation_split=0.2, callbacks=[early_stop])
A learning rate of 0.001 is the common default for Adam. patience=3 stops training after 3 consecutive epochs without improvement in validation loss, and restore_best_weights=True rolls the model back to the weights from the best epoch.
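The patience and best-epoch logic can be sketched in plain Python over a list of per-epoch validation losses. This is a simplified model of the callback's behavior, not Keras's actual implementation:

```python
def train_with_early_stopping(val_losses, patience=3):
    """Simulate early stopping over per-epoch validation losses.

    Returns (stopped_epoch, best_epoch), both as 0-based epoch indices.
    """
    best_loss = float("inf")
    best_epoch = 0
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss = loss       # improvement: remember the "best weights"
            best_epoch = epoch
            wait = 0
        else:
            wait += 1              # no improvement this epoch
            if wait >= patience:   # patience exhausted: stop training
                return epoch, best_epoch
    return len(val_losses) - 1, best_epoch

# Loss improves until epoch 2, then stalls for 3 epochs, triggering the stop.
stopped, best = train_with_early_stopping([0.9, 0.7, 0.5, 0.6, 0.6, 0.7])
print(stopped, best)  # stops at epoch 5; best weights came from epoch 2
```

With restore_best_weights=True, Keras would hand back the weights from the best epoch (epoch 2 here) rather than the last one trained.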