
Fine-Tuning in TensorFlow: Practice Problems & Coding Challenges

Challenge - 5 Problems
Model Choice (intermediate)
Choosing the right layer to fine-tune
You have a pre-trained convolutional neural network for image classification. You want to fine-tune it on a new dataset with similar images but different classes. Which layer is best to fine-tune to adapt the model effectively?
A) The last few convolutional layers and the fully connected layers
B) All layers, including the input layer
C) Only the first convolutional layer
D) Only the last fully connected (dense) layer
💡 Hint
Think about which layers capture general features versus task-specific features.
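The freezing mechanic behind this question can be sketched in a few lines. The tiny Sequential network below is a hypothetical stand-in for a pre-trained CNN; only the pattern of freezing some layers and leaving others trainable matters, not the architecture:

```python
import tensorflow as tf

# Hypothetical stand-in for a pre-trained CNN (illustrative, not MobileNetV2).
base = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(8, 3, activation='relu'),
    tf.keras.layers.Conv2D(16, 3, activation='relu'),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Freeze every layer except the last two, so early general-purpose
# features stay fixed while later layers can adapt to the new task.
for layer in base.layers[:-2]:
    layer.trainable = False

print([layer.trainable for layer in base.layers])
```

The slice `[:-2]` is an illustrative choice; in practice, how many trailing layers to leave trainable depends on how similar the new dataset is to the original one.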
Hyperparameter (intermediate)
Selecting a learning rate for fine-tuning
When fine-tuning a pre-trained model, which learning rate setting is generally recommended to avoid destroying the learned features?
A) Use a very high learning rate, like 0.1
B) Use no learning rate and keep the weights frozen
C) Use a smaller learning rate than when training from scratch, like 0.0001
D) Use the same learning rate as when training from scratch
💡 Hint
Consider how big changes to weights affect pre-trained knowledge.
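To make the hint concrete: fine-tuning runs are typically compiled with a learning rate one to two orders of magnitude below a from-scratch default, so each gradient step nudges rather than overwrites the pre-trained weights. A minimal sketch, where the tiny model and the exact rate of 1e-4 are illustrative assumptions:

```python
import tensorflow as tf

# Illustrative placeholder model; any Keras model would do here.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(2, activation='softmax'),
])

# A small learning rate (1e-4 here, an illustrative choice) keeps each
# update small enough to preserve the pre-trained features.
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)
model.compile(optimizer=optimizer,
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```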
Predict Output (advanced)
Output of fine-tuning code snippet
What will be the printed output after running this TensorFlow fine-tuning code snippet?
import tensorflow as tf
from tensorflow.keras.applications import MobileNetV2

base_model = MobileNetV2(weights='imagenet', include_top=False, input_shape=(96,96,3))
base_model.trainable = False

model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

print(f"Trainable layers before fine-tuning: {sum([layer.trainable for layer in base_model.layers])}")

base_model.trainable = True

print(f"Trainable layers after setting trainable=True: {sum([layer.trainable for layer in base_model.layers])}")
A)
Trainable layers before fine-tuning: 0
Trainable layers after setting trainable=True: 0

B)
Trainable layers before fine-tuning: 0
Trainable layers after setting trainable=True: 154

C)
Trainable layers before fine-tuning: 154
Trainable layers after setting trainable=True: 0

D)
Trainable layers before fine-tuning: 154
Trainable layers after setting trainable=True: 154
💡 Hint
Check the trainable attribute before and after changing it.
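The mechanic this question tests can be reproduced without downloading MobileNetV2 weights: in Keras, setting trainable on a model propagates the flag to every layer inside it. The three-layer stand-in below is an illustrative assumption:

```python
import tensorflow as tf

# Small stand-in for the pre-trained base model (illustrative).
base = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4),
    tf.keras.layers.Dense(4),
    tf.keras.layers.Dense(4),
])

# Setting trainable on the model propagates to all of its sublayers.
base.trainable = False
frozen = sum(layer.trainable for layer in base.layers)

base.trainable = True
unfrozen = sum(layer.trainable for layer in base.layers)

print(frozen, unfrozen)  # zero trainable layers, then all of them
```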
Metrics (advanced)
Interpreting fine-tuning training metrics
You fine-tune a pre-trained model on a small dataset. After 10 epochs, training accuracy is 98% but validation accuracy is 70%. What does this indicate?
A) There is a bug in the code
B) The model generalizes perfectly
C) The model is underfitting the training data
D) The model is overfitting the training data
💡 Hint
Compare training and validation accuracy to assess model behavior.
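One way to act on the hint is to compare the two accuracies directly. The helper below is a hypothetical heuristic, not a TensorFlow API, and the 0.1 gap threshold is an arbitrary illustrative choice:

```python
def looks_overfit(train_acc, val_acc, gap_threshold=0.1):
    """Flag a run whose train/validation accuracy gap exceeds the threshold."""
    return (train_acc - val_acc) > gap_threshold

# The scenario from the question: 98% training vs 70% validation accuracy.
print(looks_overfit(0.98, 0.70))  # True: a 28-point gap
```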
🔧 Debug (expert)
Debugging fine-tuning freezing layers issue
You want to fine-tune only the last 10 layers of a pre-trained model in TensorFlow. You set base_model.trainable = True and then, in a loop, set trainable = False on all but the last 10 layers. However, after training, you notice that all layers are still being updated. What is the likely cause?
base_model.trainable = True
for layer in base_model.layers[:-10]:
    layer.trainable = False

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.fit(train_data, epochs=5)
A) You must compile the model after changing layer trainable attributes
B) Setting base_model.trainable = True overrides individual layer settings
C) You need to set model.trainable = False instead
D) The optimizer must be reset to apply layer freezing
💡 Hint
Think about when TensorFlow reads layer trainable flags.
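The hint points at documented Keras behavior: the trainable attribute is honored at compile time, so flag changes made after compile() only take effect once the model is compiled again. A minimal sketch of the recommended pattern, using a tiny illustrative model in place of a real pre-trained network:

```python
import tensorflow as tf

# Tiny illustrative model standing in for a pre-trained network.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(4),
    tf.keras.layers.Dense(2),
])
model.compile(optimizer='adam', loss='mse')

# Change the flags, then recompile so training honors the new settings.
for layer in model.layers[:-1]:
    layer.trainable = False
model.compile(optimizer='adam', loss='mse')

# Only the last Dense layer's kernel and bias remain trainable.
print(len(model.trainable_weights))  # 2
```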