TensorFlow · ~10 mins

Why transfer learning saves time and data in TensorFlow - Test Your Understanding

Practice - 5 Tasks
Answer the questions below
Question 1: fill in the blank (easy)

Complete the code to load a pre-trained model in TensorFlow.

TensorFlow
base_model = tf.keras.applications.MobileNetV2(weights=[1], include_top=False)
A. None
B. 'random'
C. 'imagenet'
D. 'cifar10'
Common Mistakes
Using None means no pre-trained weights, so transfer learning benefits are lost.
Using dataset names like 'cifar10' is not valid for this parameter.
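For reference, a completed version of the snippet (answer C, 'imagenet'). Note that this downloads the pretrained weights the first time it runs:

```python
import tensorflow as tf

# Load MobileNetV2 with ImageNet weights and without its classification head.
# 'imagenet' is what gives you transfer learning; weights=None would start
# from random initialization instead.
base_model = tf.keras.applications.MobileNetV2(weights='imagenet', include_top=False)
```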
Question 2: fill in the blank (medium)

Complete the code to freeze the base model layers so they are not trained.

TensorFlow
base_model.trainable = [1]
A. False
B. True
C. None
D. 0
Common Mistakes
Setting trainable to True will retrain all layers, losing transfer learning speed benefits.
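A completed sketch (answer A, False). `weights=None` is used here only to keep the example light and offline; the freezing step is identical with pretrained weights:

```python
import tensorflow as tf

# weights=None avoids the ImageNet download in this sketch; freezing works the same.
base_model = tf.keras.applications.MobileNetV2(weights=None, include_top=False)

# Freeze every layer of the base model so only the new head will be trained.
base_model.trainable = False
```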
Question 3: fill in the blank (hard)

Fix the error in the code to add a new classification head after the base model.

TensorFlow
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense([1], activation='softmax')
])
A. 1
B. 10
C. '10'
D. None
Common Mistakes
Passing a string like '10' raises an error, because Dense expects an integer number of units.
Using None is invalid, and 1 will not match the number of classes in the dataset.
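A completed sketch (answer B, 10). The 10-class output is an assumption here; match the unit count to your own dataset. `weights=None` keeps the sketch light:

```python
import tensorflow as tf

base_model = tf.keras.applications.MobileNetV2(weights=None, include_top=False)
base_model.trainable = False

# New classification head: pool the feature map to a vector, then a Dense
# layer with one unit per class (10 classes assumed for illustration).
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation='softmax'),
])
```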
Question 4: fill in the blank (hard)

Fill both blanks to compile the model with an optimizer and loss suitable for transfer learning.

TensorFlow
model.compile(optimizer=[1], loss=[2], metrics=['accuracy'])
A. 'adam'
B. 'sgd'
C. 'categorical_crossentropy'
D. 'mse'
Common Mistakes
Using 'mse' loss is for regression, not classification.
Using 'sgd' optimizer works but is slower to converge than 'adam'.
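A completed sketch (answers A and C), reusing the model from the previous question with `weights=None` to stay light:

```python
import tensorflow as tf

base_model = tf.keras.applications.MobileNetV2(weights=None, include_top=False)
base_model.trainable = False
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# 'adam' converges quickly without tuning; 'categorical_crossentropy' expects
# one-hot labels (use 'sparse_categorical_crossentropy' for integer labels).
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```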
Question 5: fill in the blank (hard)

Fill all three blanks to create a data pipeline that resizes images, batches them, and prefetches for performance.

TensorFlow
train_ds = train_ds.map(lambda x, y: (tf.image.resize(x, [1]), y))
train_ds = train_ds.batch([2])
train_ds = train_ds.prefetch(buffer_size=[3])
A. (224, 224)
B. 32
C. tf.data.AUTOTUNE
D. (128, 128)
Common Mistakes
Using the wrong image size causes shape errors, because the pre-trained model expects a specific input size.
A batch size that is too large or too small hurts training speed and memory use.
Skipping prefetch, or using the wrong buffer size, leaves the GPU waiting on data loading.
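A completed sketch (answers A, B, C), using a tiny synthetic dataset in place of real image files; the shapes and sample count are assumptions for illustration:

```python
import tensorflow as tf

# Tiny synthetic dataset standing in for real images (hypothetical shapes).
images = tf.random.uniform((8, 96, 96, 3))
labels = tf.random.uniform((8,), maxval=10, dtype=tf.int32)
train_ds = tf.data.Dataset.from_tensor_slices((images, labels))

# Resize to the 224x224 input MobileNetV2 expects, batch, and let prefetch
# overlap data preparation with training.
train_ds = train_ds.map(lambda x, y: (tf.image.resize(x, (224, 224)), y))
train_ds = train_ds.batch(32)
train_ds = train_ds.prefetch(buffer_size=tf.data.AUTOTUNE)
```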