Challenge - 5 Problems
Fine-tuning Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
❓ Model Choice
Intermediate · 2:00
Choosing the right layer to fine-tune
You have a pre-trained convolutional neural network for image classification. You want to fine-tune it on a new dataset with similar images but different classes. Which layers are best to fine-tune to adapt the model effectively?
Attempts: 2 left
💡 Hint
Think about which layers capture general features versus task-specific features.
✗ Incorrect
The first layers learn general features like edges, which are useful across tasks. The last layers learn task-specific features. Fine-tuning the last few convolutional layers and fully connected layers allows adapting to new classes while keeping general features intact.
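The freezing pattern described above can be sketched without any framework. This is a minimal stdlib-Python toy (the function name `freeze_all_but_last` and the layer records are illustrative, not a real API): each "layer" is a dict with a trainable flag, and only the last few are left trainable.

```python
def freeze_all_but_last(layers, n_trainable):
    """Mark only the last n_trainable layers as trainable."""
    for i, layer in enumerate(layers):
        layer["trainable"] = i >= len(layers) - n_trainable
    return layers

# A toy 8-layer "model": early layers capture general features (edges),
# later layers capture task-specific features.
model = [{"name": f"conv{i}", "trainable": True} for i in range(8)]
freeze_all_but_last(model, 2)

flags = [layer["trainable"] for layer in model]
print(flags)  # first 6 layers frozen, last 2 trainable
```

In a real framework the idea is the same: iterate over the layer list and flip each layer's trainable flag, keeping the general-feature layers frozen.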
❓ Hyperparameter
Intermediate · 2:00
Selecting learning rate for fine-tuning
When fine-tuning a pre-trained model, which learning rate setting is generally recommended to avoid destroying the learned features?
Attempts: 2 left
💡 Hint
Consider how big changes to weights affect pre-trained knowledge.
✗ Incorrect
A smaller learning rate helps make gentle updates to the pre-trained weights, preserving useful features while adapting to new data. A high learning rate can overwrite learned features too quickly.
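The effect of step size can be seen on a toy objective. This is a hedged sketch (plain gradient descent on f(x) = x², no real model): starting near the optimum stands in for starting from good pre-trained weights.

```python
def step(x, lr, grad=lambda x: 2 * x):
    """One gradient-descent step on f(x) = x**2 (minimum at x = 0)."""
    return x - lr * grad(x)

x0 = 0.1                 # already close to the optimum, like pre-trained weights
small = step(x0, 0.01)   # gentle update: stays near the optimum
large = step(x0, 1.5)    # aggressive update: overshoots past the optimum

print(abs(small), abs(large))
```

The small learning rate leaves the iterate closer to the optimum than where it started; the large one overshoots and ends up farther away, the toy analogue of "overwriting learned features too quickly".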
❓ Predict Output
Advanced · 2:00
Output of fine-tuning code snippet
What will be the printed output after running this TensorFlow fine-tuning code snippet?
TensorFlow
import tensorflow as tf
from tensorflow.keras.applications import MobileNetV2

base_model = MobileNetV2(weights='imagenet', include_top=False, input_shape=(96, 96, 3))
base_model.trainable = False

model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

print(f"Trainable layers before fine-tuning: {sum(layer.trainable for layer in base_model.layers)}")
base_model.trainable = True
print(f"Trainable layers after setting trainable=True: {sum(layer.trainable for layer in base_model.layers)}")
Attempts: 2 left
💡 Hint
Check the trainable attribute before and after changing it.
✗ Incorrect
Initially, base_model.trainable is set to False, so no layers are trainable (0). After setting base_model.trainable = True, all layers become trainable. MobileNetV2 has 154 layers, so the count is 154.
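The cascading behavior can be mimicked in a small stdlib-Python toy (the `ToyBase` class is illustrative, not Keras; the 154-layer count is taken from the explanation above): setting the model-level flag propagates to every sub-layer, so the trainable count jumps from 0 to the total.

```python
class ToyBase:
    """Mimics Keras: model-level trainable cascades to all sub-layers."""

    def __init__(self, n_layers):
        self._layers = [{"trainable": True} for _ in range(n_layers)]

    @property
    def trainable(self):
        return any(layer["trainable"] for layer in self._layers)

    @trainable.setter
    def trainable(self, value):
        # Setting the container flag sets every sub-layer's flag.
        for layer in self._layers:
            layer["trainable"] = value

    def count_trainable(self):
        return sum(layer["trainable"] for layer in self._layers)

base = ToyBase(154)        # MobileNetV2 reports 154 layers
base.trainable = False
print(base.count_trainable())  # 0
base.trainable = True
print(base.count_trainable())  # 154
```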
❓ Metrics
Advanced · 2:00
Interpreting fine-tuning training metrics
You fine-tune a pre-trained model on a small dataset. After 10 epochs, training accuracy is 98% but validation accuracy is 70%. What does this indicate?
Attempts: 2 left
💡 Hint
Compare training and validation accuracy to assess model behavior.
✗ Incorrect
High training accuracy but much lower validation accuracy means the model learned the training data too well but does not generalize to new data, which is overfitting.
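A simple numeric check captures this diagnosis. This is a hedged sketch (the helper `is_overfitting` and its threshold are illustrative, not a standard metric): flag a run whose train/validation accuracy gap exceeds a chosen threshold.

```python
def is_overfitting(train_acc, val_acc, gap_threshold=0.1):
    """Flag a suspiciously large train/validation accuracy gap."""
    return (train_acc - val_acc) > gap_threshold

print(is_overfitting(0.98, 0.70))  # the quiz scenario: 28-point gap -> True
print(is_overfitting(0.85, 0.82))  # small gap: generalizing fine -> False
```

Common remedies for the overfitting case include more data or augmentation, stronger regularization, freezing more layers, or early stopping.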
🔧 Debug
Expert · 3:00
Debugging fine-tuning freezing layers issue
You want to fine-tune only the last 10 layers of a pre-trained model in TensorFlow. You set base_model.trainable = True and then set the first layers' trainable attribute to False in a loop. However, after training, you notice all layers are still being updated. What is the likely cause?
TensorFlow
base_model.trainable = True
for layer in base_model.layers[:-10]:
    layer.trainable = False

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.fit(train_data, epochs=5)
Attempts: 2 left
💡 Hint
Think about when TensorFlow reads layer trainable flags.
✗ Incorrect
In TensorFlow, changing layer trainable flags after compiling the model has no effect until you recompile. The optimizer and trainable variables are fixed at compile time.
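The compile-time snapshot can be illustrated with a stdlib-Python toy (the `ToyModel` class is illustrative, not the real Keras implementation): "compile" records which layers are trainable, so flag changes made afterwards are ignored until you recompile.

```python
class ToyModel:
    """Toy pitfall demo: compile() snapshots the trainable flags."""

    def __init__(self, n_layers):
        self.layers = [{"trainable": True} for _ in range(n_layers)]
        self._compiled_trainable = None

    def compile(self):
        # Like Keras, fix the set of trainable variables at compile time.
        self._compiled_trainable = [layer["trainable"] for layer in self.layers]

    def updated_layer_count(self):
        # How many layers the "optimizer" would actually update.
        return sum(self._compiled_trainable)

model = ToyModel(20)
model.compile()                    # compiled while everything is trainable
for layer in model.layers[:-10]:   # freezing AFTER compile...
    layer["trainable"] = False
print(model.updated_layer_count())  # ...still 20: all layers update

model.compile()                     # recompile picks up the new flags
print(model.updated_layer_count())  # now only the last 10 update
```

The fix in real TensorFlow code is the same: set all the trainable flags first, then call model.compile(), or recompile after changing them.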