ML Python · ~10 mins

Retraining strategies in ML Python - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1 · Fill in the blank · Easy

Complete the code to retrain a model using new data.

model.fit(new_data, new_labels, epochs=[1])

Options:
A. 5
B. -1
C. 0
D. 10

Common Mistakes:
- Using 0 epochs means no training happens.
- Negative epoch counts raise an error.
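To see why the epochs value matters, here is a pure-Python toy stand-in for model.fit (an illustration of the epochs semantics, not the Keras API): the update step runs exactly `epochs` times, so 0 performs no training and a negative count is rejected.

```python
def toy_fit(weight, data, labels, epochs, lr=0.1):
    # Toy single-weight "model": illustrates that training runs once per epoch.
    if epochs < 0:
        raise ValueError("epochs must be non-negative")
    for _ in range(epochs):
        for x, y in zip(data, labels):
            error = weight * x - y      # prediction error for one sample
            weight -= lr * error * x    # gradient step for squared error
    return weight

w = toy_fit(0.0, [1.0, 2.0], [2.0, 4.0], epochs=5)       # weight moves toward 2.0
w_none = toy_fit(0.0, [1.0, 2.0], [2.0, 4.0], epochs=0)  # stays 0.0: no training
```

With epochs=0 the loop body never executes, which is exactly why 0 is listed as a common mistake above.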
Task 2 · Fill in the blank · Medium

Complete the code to freeze all layers except the last one before retraining.

for layer in model.layers[:-1]:
    layer.trainable = [1]

Options:
A. False
B. True
C. None
D. 0

Common Mistakes:
- Setting trainable to True leaves all layers trainable, so nothing is frozen.
- None and 0 are not valid values for the trainable property.
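Here is a plain-Python illustration of what freezing does (a toy stand-in, not Keras): each "layer" carries a trainable flag, the loop from the task flips it off for all but the last layer, and a training step then skips the frozen ones.

```python
class ToyLayer:
    # Minimal stand-in for a layer with a trainable flag.
    def __init__(self, weight):
        self.weight = weight
        self.trainable = True

layers = [ToyLayer(1.0), ToyLayer(1.0), ToyLayer(1.0)]

# Freeze every layer except the last, mirroring the loop in the task.
for layer in layers[:-1]:
    layer.trainable = False

# A training step only updates trainable layers.
for layer in layers:
    if layer.trainable:
        layer.weight += 0.5

# Only the last layer's weight changed; the frozen ones kept their weights.
```

This is the same effect the Keras loop has: during fit, gradients are simply not applied to layers whose trainable flag is False.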
Task 3 · Fill in the blank · Hard

Fix the error in the code to compile the model with a suitable optimizer for retraining.

model.compile(optimizer=[1], loss='categorical_crossentropy', metrics=['accuracy'])

Options:
A. 'sgd'
B. 'adagrad'
C. 'adam'
D. 'rmsprop'

Common Mistakes:
- Unsupported optimizer names cause errors.
- Choosing SGD may require tuning the learning rate.
Task 4 · Fill in the blank · Hard

Fill both blanks to create a dictionary comprehension that filters new data samples with label 1 for retraining.

filtered_data = {i: x for i, x in enumerate(new_data) if new_labels[i] [1] [2]}

Options:
A. ==
B. !=
C. 1
D. 0

Common Mistakes:
- Using '!=' filters out the desired class.
- Using 0 instead of 1 keeps the wrong class.
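The common-mistakes notes above pin the blanks to '==' and '1'. With small illustrative lists standing in for the real new_data and new_labels, the completed comprehension behaves like this:

```python
# Illustrative data: new_data/new_labels here are made-up examples.
new_data = ['a', 'b', 'c', 'd']
new_labels = [0, 1, 1, 0]

# Blanks filled as '==' and '1': keep only samples whose label is 1,
# keyed by their original index.
filtered_data = {i: x for i, x in enumerate(new_data) if new_labels[i] == 1}
# filtered_data == {1: 'b', 2: 'c'}
```

Swapping '==' for '!=' (or 1 for 0) would invert the condition and keep exactly the samples you wanted to exclude.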
Task 5 · Fill in the blank · Hard

Fill all three blanks to update the learning rate and retrain the model with early stopping.

import tensorflow as tf
from tensorflow.keras.callbacks import EarlyStopping

optimizer = tf.keras.optimizers.Adam(learning_rate=[1])
model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
early_stop = EarlyStopping(monitor='val_loss', patience=[2], restore_best_weights=[3])
model.fit(train_data, train_labels, epochs=20, validation_split=0.2, callbacks=[early_stop])

Options:
A. 0.001
B. 3
C. True
D. 0.1

Common Mistakes:
- Too high a learning rate causes training instability.
- Setting patience too low stops training too early.
- Without restore_best_weights, the model may keep worse weights from the final epoch.
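The patience and restore_best_weights behavior can be sketched in plain Python (a minimal illustration of the early-stopping idea, not Keras's implementation): track the best validation loss, stop after `patience` epochs without improvement, and optionally fall back to the best epoch instead of the last one.

```python
def train_with_early_stopping(val_losses, patience=3, restore_best_weights=True):
    # val_losses stands in for per-epoch validation losses observed during fit.
    best_loss = float('inf')
    best_epoch = None
    wait = 0
    stopped_at = len(val_losses) - 1
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss, best_epoch, wait = loss, epoch, 0
        else:
            wait += 1
            if wait >= patience:   # no improvement for `patience` epochs: stop
                stopped_at = epoch
                break
    # restore_best_weights=True keeps the best epoch's state, not the last one's
    return (best_epoch if restore_best_weights else stopped_at), best_loss

# Loss improves, then plateaus: training halts 3 epochs after the best value.
losses = [1.0, 0.8, 0.5, 0.6, 0.7, 0.9, 1.1]
epoch, loss = train_with_early_stopping(losses, patience=3)
# epoch == 2, loss == 0.5: the best epoch is restored, not the stopping epoch
```

With restore_best_weights=False the function returns the stopping epoch instead, which is exactly the "may keep worse model" pitfall listed above.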