Computer Vision · ~20 mins

Why pre-trained models save time in Computer Vision - Challenge Your Understanding

Challenge - 5 Problems
🎖️ Pre-trained Model Master
Get all challenges correct to earn this badge! Test your skills under time pressure!
🧠 Conceptual · intermediate · 2:00 remaining
Why do pre-trained models reduce training time?

Imagine you want to teach a robot to recognize objects. You can start from scratch or use a robot that already knows some objects. Why does using a pre-trained model save time?

A. Because pre-trained models use less memory, making training quicker.
B. Because pre-trained models use simpler algorithms that run faster.
C. Because pre-trained models skip the training process entirely and only do testing.
D. Because the model already learned useful features, so it needs less data and time to adapt to the new task.
Attempts: 2 left
💡 Hint

Think about how learning basics first helps you learn new things faster.
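The hint can be made concrete with a toy parameter count (a hedged sketch using made-up layer sizes, not any specific model): reusing a frozen feature extractor means only a small new head has to be trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pre-trained" feature extractor: 3072 inputs -> 128 features.
# These weights stay frozen during fine-tuning.
W_frozen = rng.normal(size=(3072, 128))

# New classification head for 10 classes: the only part that learns.
W_head = rng.normal(size=(128, 10))

params_from_scratch = W_frozen.size + W_head.size  # train everything
params_fine_tuned = W_head.size                    # train only the head

print(f"From scratch: {params_from_scratch} trainable parameters")
print(f"Fine-tuning:  {params_fine_tuned} trainable parameters")
```

Fewer trainable parameters means fewer gradients to compute and fewer updates per step, which is where the time savings come from.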

Predict Output · intermediate · 2:00 remaining
Output of fine-tuning a pre-trained model

Consider this Python code, which loads a pre-trained model and fine-tunes it for one epoch on a small random dataset. Which printed output is most likely?

from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model
import numpy as np

base_model = MobileNetV2(weights='imagenet', include_top=False, input_shape=(96,96,3))

x = base_model.output
x = GlobalAveragePooling2D()(x)
predictions = Dense(10, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=predictions)

for layer in base_model.layers:
    layer.trainable = False

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Dummy data
X_train = np.random.random((5,96,96,3))
y_train = np.eye(10)[np.random.choice(10,5)]

history = model.fit(X_train, y_train, epochs=1, verbose=0)

print(f"Training accuracy: {history.history['accuracy'][0]:.2f}")
A. Training accuracy: 0.20
B. Training accuracy: 0.00
C. Training accuracy: 1.00
D. Training accuracy: 0.50
Attempts: 2 left
💡 Hint

Think about training on random data with frozen base layers.
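One aside worth noticing before answering: with only 5 training samples, the reported accuracy can take only a handful of values, since each sample is either right or wrong.

```python
# With 5 samples, accuracy is (correct count) / 5, so only six values are possible.
n_samples = 5
possible_accuracies = [k / n_samples for k in range(n_samples + 1)]
print(possible_accuracies)  # [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
```

Combine this granularity with what chance-level performance looks like for a 10-class problem to narrow down the options.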

Model Choice · advanced · 2:00 remaining
Choosing a pre-trained model for fast training

You want to train a model quickly on a small image dataset. Which pre-trained model choice will save the most training time?

A. A large model like ResNet152 with many layers and parameters.
B. A small model like MobileNetV2 designed for efficiency.
C. A model trained from scratch with random weights.
D. A model with no pre-trained weights but fewer layers.
Attempts: 2 left
💡 Hint

Think about model size and pre-training impact on training speed.
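As a rough, hedged comparison (parameter counts are approximate, rounded from the published Keras model summaries), the size gap between the two pre-trained options is large:

```python
# Approximate parameter counts from published model summaries (rounded).
mobilenet_v2_params = 3.5e6   # MobileNetV2 with ImageNet weights
resnet152_params = 60.4e6     # ResNet152 with ImageNet weights

ratio = resnet152_params / mobilenet_v2_params
print(f"ResNet152 is roughly {ratio:.0f}x larger than MobileNetV2")
```

More parameters means more computation per forward and backward pass, so model size directly affects how long each training step takes.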

Hyperparameter · advanced · 2:00 remaining
Effect of freezing layers on training time

When using a pre-trained model, how does freezing more layers affect training time?

A. Freezing layers has no effect on training time.
B. Freezing more layers increases training time because the model becomes unstable.
C. Freezing more layers reduces training time because fewer parameters are updated.
D. Freezing layers causes the model to train slower due to extra computations.
Attempts: 2 left
💡 Hint

Think about how many parts of the model need to learn during training.
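A minimal sketch (toy layer sizes, assumed purely for illustration) of why freezing shrinks the work per step: only trainable layers receive gradient updates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Five hypothetical 64x64 weight matrices; the first three are frozen.
layers = [rng.normal(size=(64, 64)) for _ in range(5)]
trainable = [False, False, False, True, True]

updated = sum(W.size for W, t in zip(layers, trainable) if t)
total = sum(W.size for W in layers)
print(f"Each step updates {updated} of {total} parameters")  # 8192 of 20480
```

The fewer parameters the optimizer has to update, the less gradient computation and bookkeeping each training step requires.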

Metrics · expert · 2:00 remaining
Interpreting training speed and accuracy with pre-trained models

You fine-tune two pre-trained models on the same small dataset. Model A trains faster but has lower accuracy. Model B trains slower but achieves higher accuracy. What is the best explanation?

A. Model A is smaller and less complex, so it trains faster but may not capture all features, leading to lower accuracy.
B. Model B uses random weights, so it trains slower but learns better.
C. Model B has fewer layers, so it trains slower but is more accurate.
D. Model A uses a worse optimizer causing faster training but lower accuracy.
Attempts: 2 left
💡 Hint

Consider model size, complexity, and training speed trade-offs.