TensorFlow · ~20 mins

Transfer learning for small datasets in TensorFlow - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual · intermediate
Why use transfer learning on small datasets?

Imagine you have a tiny dataset of 100 images to classify. Why is transfer learning a good choice here?

A. It uses a pre-trained model to extract useful features, reducing the need for large data.
B. It trains a new model from scratch to perfectly fit the small dataset.
C. It ignores the small dataset and uses only the pre-trained model's predictions.
D. It increases the dataset size by copying images multiple times.
💡 Hint

Think about how pre-trained models help when data is limited.
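The feature-extraction idea behind this question can be sketched in a few lines. A minimal sketch, assuming a MobileNetV2 backbone; `weights=None` is used here only so the snippet runs without downloading pre-trained weights (in practice you would pass `weights='imagenet'`, which is what makes transfer learning pay off on a 100-image dataset):

```python
import tensorflow as tf

# Minimal feature-extraction sketch. weights=None only avoids the weight
# download; real transfer learning would use weights='imagenet'.
base = tf.keras.applications.MobileNetV2(
    input_shape=(128, 128, 3), include_top=False, weights=None)
base.trainable = False  # freeze: the small dataset only trains the head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation='softmax'),  # small new classifier head
])
```

Because the base is frozen, only the Dense head's parameters are updated during training, which is why even very small datasets can be workable.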

Predict Output · intermediate
Output shape after freezing base model layers

Given this TensorFlow code snippet, what is the output shape of the model's predictions?

TensorFlow
import tensorflow as tf
base_model = tf.keras.applications.MobileNetV2(input_shape=(128,128,3), include_top=False, weights='imagenet')
base_model.trainable = False
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation='softmax')
])
output_shape = model.output_shape
A. (None, 128, 128, 3)
B. (None, 5)
C. (None, 7, 7, 1280)
D. (None, 1)
💡 Hint

Check the last Dense layer's output units.
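To reason this out, trace the tensor shapes layer by layer. A sketch with `weights=None` purely to avoid the weight download; the shapes are identical with `weights='imagenet'`:

```python
import tensorflow as tf

# Trace how shapes flow through the head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(128, 128, 3), include_top=False, weights=None)
print(base.output_shape)  # MobileNetV2 downsamples by 32: 128 / 32 = 4

pooled = tf.keras.layers.GlobalAveragePooling2D()(base.output)  # drops H and W
preds = tf.keras.layers.Dense(5, activation='softmax')(pooled)
model = tf.keras.Model(base.input, preds)
print(model.output_shape)  # the final Dense layer fixes the last dimension
```

Note that the 7×7 in option C is what you would get at the default 224×224 input, not at 128×128.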

Hyperparameter · advanced
Best learning rate choice for fine-tuning a pre-trained model

You want to fine-tune a pre-trained model on a small dataset. Which learning rate is most suitable?

A. 0.1 (high learning rate)
B. 0.001 (moderate learning rate)
C. 1.0 (extremely high learning rate)
D. 0.00001 (very low learning rate)
💡 Hint

Fine-tuning usually requires careful, small updates.
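A widely used recipe (e.g. in the official Keras transfer-learning guide) is two-phase: a moderate rate while only the new head trains, then a much smaller rate once pre-trained layers are unfrozen. A sketch, with `weights=None` only to keep the snippet download-free:

```python
import tensorflow as tf

# Two-phase fine-tuning sketch (weights=None only avoids the download).
base = tf.keras.applications.MobileNetV2(
    input_shape=(128, 128, 3), include_top=False, weights=None)
base.trainable = False
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation='softmax'),
])

# Phase 1: only the new head trains, so a moderate rate like 1e-3 is fine.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss='sparse_categorical_crossentropy')

# Phase 2: unfreeze and recompile with a much smaller rate, so the
# pre-trained weights are nudged rather than overwritten.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss='sparse_categorical_crossentropy')
```

A high rate in phase 2 would destroy the pre-trained features, which is exactly what the question is probing.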

Metrics · advanced
Interpreting validation accuracy in transfer learning

After training a transfer learning model on a small dataset, you see training accuracy at 95% but validation accuracy at 60%. What does this indicate?

A. The model is underfitting and needs more training.
B. The validation data is too easy compared to training data.
C. The model is overfitting the training data.
D. The model is perfectly generalized.
💡 Hint

Think about the difference between training and validation accuracy.
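A 35-point train/validation gap is the textbook symptom this question is probing. Common mitigations on small datasets include input augmentation and early stopping; a brief sketch (the accuracy numbers below are the hypothetical ones from the question):

```python
import tensorflow as tf

# Hypothetical history values taken from the question.
train_acc, val_acc = 0.95, 0.60
gap = train_acc - val_acc  # a large gap suggests memorization of train data

# Common remedies for small datasets: augment the inputs, and stop
# training when validation accuracy stops improving.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip('horizontal'),
    tf.keras.layers.RandomRotation(0.1),
])
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_accuracy', patience=3, restore_best_weights=True)
```

The `augment` block would be applied to training images only, and `early_stop` would be passed to `model.fit(..., callbacks=[early_stop])`.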

🔧 Debug · expert
Why does this transfer learning code raise an error?

Consider this TensorFlow code snippet. It raises a ValueError when running. What is the cause?

TensorFlow
import tensorflow as tf
base_model = tf.keras.applications.ResNet50(weights='imagenet', include_top=False, input_shape=(16,16,3))
base_model.trainable = False
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
import numpy as np
x = np.random.rand(32, 16, 16, 3)
y = np.random.randint(0, 10, size=(32,))
model.fit(x, y, epochs=1)
A. Input shape (16,16,3) is too small for the ResNet50 base model.
B. The batch size 32 is too large for this model.
C. The base model must be trainable for transfer learning to work.
D. Loss function sparse_categorical_crossentropy is incompatible with softmax activation.
💡 Hint

Check the minimum input size requirements of ResNet50.
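One way to follow the hint empirically: Keras application models validate `input_shape` at construction time, and the documented minimum for ResNet50 with `include_top=False` is 32×32. A sketch (`weights=None` only avoids the pre-trained weight download; the shape validation runs either way):

```python
import tensorflow as tf

# Keras application models validate input_shape when constructed;
# ResNet50 documents a 32x32 minimum with include_top=False.
rejected = False
try:
    tf.keras.applications.ResNet50(
        weights=None, include_top=False, input_shape=(16, 16, 3))
except ValueError:
    rejected = True  # below the documented 32x32 minimum

# A size at or above the minimum builds fine; ResNet50 downsamples the
# spatial dimensions by a factor of 32.
ok = tf.keras.applications.ResNet50(
    weights=None, include_top=False, input_shape=(64, 64, 3))
print(rejected, ok.output_shape)
```

Checking the construction step in isolation like this separates shape errors from problems that only surface later in `model.fit`.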