TensorFlow · ~10 mins

Fine-tuning approach in TensorFlow - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1: Fill in the blank (easy)

Complete the code to load a pre-trained model for fine-tuning.

TensorFlow
base_model = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3), include_top=False, weights=[1])
A. None
B. 'random'
C. 'imagenet'
D. 'cifar10'
Common Mistakes
Using None leaves the network randomly initialized, so no pre-trained weights are loaded.
'random' and 'cifar10' are not valid values for the weights argument.
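The completed line can be sketched as below. Note one assumption for this offline sketch: weights=None is used so the snippet runs without downloading anything; the quiz answer, weights='imagenet', downloads and loads the pre-trained ImageNet weights.

```python
import tensorflow as tf

# Quiz answer: weights='imagenet' loads pre-trained ImageNet weights.
# weights=None is used here only so this sketch runs offline; swap in
# 'imagenet' for real fine-tuning.
base_model = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,  # drop the ImageNet classifier head
    weights=None,       # use 'imagenet' in practice
)

print(base_model.output_shape)  # (None, 7, 7, 1280)
```

With include_top=False the model ends at its last convolutional feature map, which is why a pooling layer and a new head are added in the later tasks.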
Task 2: Fill in the blank (medium)

Complete the code to freeze the base model layers during fine-tuning.

TensorFlow
base_model.[1] = False
A. requires_grad
B. trainable
C. frozen
D. frozen_layers
Common Mistakes
'frozen' and 'frozen_layers' are not valid Keras attributes.
'requires_grad' is PyTorch syntax, not TensorFlow.
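A minimal sketch of freezing, using a small stand-in model (an assumption, to keep the example fast) instead of the MobileNetV2 base:

```python
import tensorflow as tf

# Tiny stand-in for the pre-trained base, just to demonstrate freezing.
base_model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8),
    tf.keras.layers.Dense(2),
])

base_model.trainable = False  # quiz answer: the Keras attribute is `trainable`

# After freezing, no weights will receive gradient updates.
print(len(base_model.trainable_variables))  # 0
```

Setting trainable on the model propagates to every layer inside it, which is why a single assignment freezes the whole base.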
Task 3: Fill in the blank (hard)

Complete the code to add a global average pooling layer after the base model.

TensorFlow
x = base_model.output
x = tf.keras.layers.[1]()(x)
A. GlobalAveragePooling2D
B. MaxPooling2D
C. AveragePooling2D
D. GlobalMaxPooling2D
Common Mistakes
The MaxPooling variants change the operation from averaging to taking the maximum.
AveragePooling2D is not global and requires a pool size.
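The pooling step can be demonstrated on a dummy tensor shaped like the MobileNetV2 feature maps (the tensor itself is an assumption standing in for base_model.output):

```python
import tensorflow as tf

# GlobalAveragePooling2D averages over the spatial dimensions, collapsing
# (batch, H, W, C) feature maps to (batch, C) vectors -- no pool size needed.
x = tf.ones((1, 7, 7, 1280))  # stand-in for base_model.output features
pooled = tf.keras.layers.GlobalAveragePooling2D()(x)
print(pooled.shape)  # (1, 1280)
```

Collapsing to a flat (batch, C) vector is what lets a Dense classification head be attached directly, with no Flatten layer.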
Task 4: Fill in the blank (hard)

Fill both blanks to compile the fine-tuned model with an optimizer and loss function.

TensorFlow
model.compile(optimizer=tf.keras.optimizers.[1](learning_rate=0.0001), loss='[2]', metrics=['accuracy'])
A. Adam
B. SGD
C. categorical_crossentropy
D. binary_crossentropy
Common Mistakes
SGD can work but typically needs different learning-rate tuning than Adam.
'binary_crossentropy' is only for two-class problems.
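A sketch of the compile step with both blanks filled in; the small Dense head and its 10-class, 1280-feature shapes are assumptions chosen to match the pooled MobileNetV2 features:

```python
import tensorflow as tf

# Hypothetical classification head on top of the pooled 1280-dim features.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1280,)),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Adam with a small learning rate is a common fine-tuning default;
# categorical_crossentropy matches one-hot multi-class labels.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
    loss='categorical_crossentropy',
    metrics=['accuracy'],
)
```

The low learning rate (1e-4 rather than the Adam default of 1e-3) is deliberate for fine-tuning: large updates would quickly destroy the pre-trained features.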
Task 5: Fill in the blank (hard)

Fill all three blanks to create a list of training callbacks for early stopping and saving the best model.

TensorFlow
callbacks = [
    tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=[1]),
    tf.keras.callbacks.ModelCheckpoint(filepath='best_model.h5', save_best_only=[2], monitor='[3]')
]
A. 3
B. True
C. val_loss
D. False
Common Mistakes
Setting patience too high wastes epochs; setting it to zero stops at the first non-improving epoch.
Setting save_best_only=False saves a checkpoint after every epoch, not just the best one.
Monitoring training loss instead of validation loss defeats the purpose of early stopping.
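Putting the three answers together, the callbacks list can be sketched as follows (constructing the callbacks does not write any files; best_model.h5 is only created during fit):

```python
import tensorflow as tf

# Filled-in answers: patience=3, save_best_only=True, monitor='val_loss'.
callbacks = [
    tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3),
    tf.keras.callbacks.ModelCheckpoint(
        filepath='best_model.h5',  # HDF5 checkpoint of the best epoch
        save_best_only=True,
        monitor='val_loss',
    ),
]
print(callbacks[0].patience)  # 3
```

The list would then be passed as model.fit(..., callbacks=callbacks), so training stops after 3 epochs without val_loss improvement and only the best-scoring weights are kept on disk.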