TensorFlow · ~20 mins

Compiling models (optimizer, loss, metrics) in TensorFlow - Practice Problems & Coding Challenges

Challenge - 5 Problems
🎖️ Model Compilation Master badge: get all five challenges correct to earn it. Test your skills under time pressure!
Predict Output (intermediate) · Time limit: 2:00
Output of model training metrics
Consider the following TensorFlow model compilation and training code snippet. What will be the printed output for the training accuracy after the first epoch?
TensorFlow
import tensorflow as tf
import numpy as np

model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, input_shape=(2,), activation='sigmoid')
])

model.compile(optimizer='sgd', loss='binary_crossentropy', metrics=['accuracy'])

# Dummy data
x_train = np.array([[0,0],[0,1],[1,0],[1,1]])
y_train = np.array([0,1,1,0])

history = model.fit(x_train, y_train, epochs=1, verbose=0)
print(f"Training accuracy: {history.history['accuracy'][0]:.2f}")
A) Training accuracy: 0.50
B) Training accuracy: 1.00
C) Training accuracy: 0.25
D) Training accuracy: 0.00
💡 Hint
Think about the model's initial random weights and the simple dataset.
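For intuition behind the hint: a single sigmoid unit predicts class 1 exactly when w·x + b > 0, so its decision boundary is a line, and no line separates the XOR labels above. A brute-force numpy check (the weight grid is arbitrary, chosen only for illustration) confirms this:

```python
import itertools

import numpy as np

# The four XOR points from the problem and their labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# Try every line w1*x1 + w2*x2 + b = 0 on a coarse grid of coefficients
# and record the best accuracy any of them achieves on XOR.
best = 0.0
grid = np.linspace(-2, 2, 9)
for w1, w2, b in itertools.product(grid, grid, grid):
    pred = (X @ np.array([w1, w2]) + b > 0).astype(int)
    best = max(best, (pred == y).mean())

print(best)  # 0.75 — no linear boundary classifies all four points
```

Since the single-neuron model cannot fit XOR, after one epoch of SGD from random initialization its accuracy typically sits at chance level on this balanced dataset.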
Model Choice (intermediate) · Time limit: 1:30
Choosing the correct loss function for multi-class classification
You want to compile a TensorFlow model to classify images into 5 categories. The labels are one-hot encoded vectors. Which loss function should you choose?
A) tf.keras.losses.SparseCategoricalCrossentropy()
B) tf.keras.losses.CategoricalCrossentropy()
C) tf.keras.losses.BinaryCrossentropy()
D) tf.keras.losses.MeanSquaredError()
💡 Hint
One-hot encoded labels require a specific crossentropy loss.
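To see what the hinted loss computes, here is a minimal numpy sketch of categorical crossentropy on one-hot labels (the probability values are made up for illustration):

```python
import numpy as np

# One-hot labels for 3 samples over 5 classes (the setting in the question).
y_true = np.eye(5)[[0, 2, 4]]

# Softmax-style probabilities from a hypothetical model (rows sum to 1).
y_pred = np.array([
    [0.70, 0.10, 0.10, 0.05, 0.05],
    [0.10, 0.10, 0.60, 0.10, 0.10],
    [0.05, 0.05, 0.10, 0.10, 0.70],
])

# Categorical crossentropy: -sum over classes of y_true * log(y_pred),
# averaged over the batch. With one-hot y_true this picks out the log
# probability assigned to each sample's true class.
loss = -np.mean(np.sum(y_true * np.log(y_pred), axis=1))
print(round(loss, 4))
```

Sparse categorical crossentropy computes the same quantity but expects integer class indices (e.g. `[0, 2, 4]`) rather than one-hot vectors.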
Hyperparameter (advanced) · Time limit: 1:30
Effect of optimizer choice on training speed
You compile two identical models with the same architecture and loss but different optimizers: Adam and SGD. Which statement about their training behavior is generally true?
A) Adam adapts learning rates and often converges faster than SGD.
B) SGD usually converges faster than Adam on most problems.
C) Both optimizers always produce the same training speed and results.
D) Adam requires manual learning rate decay to work properly.
💡 Hint
Think about how Adam adjusts learning rates automatically.
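A minimal numpy sketch of one update step of each optimizer (standard textbook update rules with made-up hyperparameters, not TensorFlow's internal implementation) shows the difference the hint points at: Adam rescales each parameter's step by running gradient statistics, while SGD scales every component by the same fixed learning rate.

```python
import numpy as np

def sgd_step(w, grad, lr=0.01):
    # Plain SGD: one global learning rate scales every gradient component.
    return w - lr * grad

def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-7):
    # Adam tracks running first and second moments of the gradient and
    # divides by the (bias-corrected) second moment, so the effective
    # step size adapts per parameter.
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

w = np.zeros(2)
grad = np.array([100.0, 0.01])  # wildly different gradient scales

print(sgd_step(w, grad))  # steps proportional to the raw gradients
w_adam, _, _ = adam_step(w, grad, np.zeros(2), np.zeros(2), t=1)
print(w_adam)             # near-equal step sizes despite the scale gap
```

This per-parameter normalization is why Adam often makes faster early progress than plain SGD with an untuned learning rate, though a well-tuned SGD (often with momentum) can still match or beat it in final quality on some problems.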
Metrics (advanced) · Time limit: 2:00
Interpreting multiple metrics in model.compile
If you compile a model with metrics=['accuracy', 'precision', 'recall'], what will be the output after training?
A) The metrics will be ignored and only loss will be reported.
B) Only 'accuracy' will be tracked because 'precision' and 'recall' are invalid metric names.
C) The model will raise a ValueError due to invalid metric names.
D) The training history will include keys 'accuracy', 'precision', and 'recall' with their values per epoch.
💡 Hint
TensorFlow supports standard metrics such as 'precision' and 'recall'.
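The metrics themselves are simple ratios over the confusion-matrix counts. A numpy sketch with made-up binary predictions:

```python
import numpy as np

# True labels and model predictions for a small batch (made-up data).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 1, 0, 1])

tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives

precision = tp / (tp + fp)  # of the predicted positives, how many are right
recall = tp / (tp + fn)     # of the actual positives, how many are found
print(precision, recall)    # 0.6 0.75
```

In Keras, passing the strings 'precision' and 'recall' to `metrics=` resolves them to the built-in metric classes, and their per-epoch values appear under the corresponding keys in `history.history` alongside 'accuracy' and 'loss'.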
🔧 Debug (expert) · Time limit: 2:30
Identifying the cause of a metric reporting error
You compile a TensorFlow model with metrics=['accuracy'] but during training, you get this error: "ValueError: Shapes (None, 1) and (None, 10) are incompatible". What is the most likely cause?
A) The loss function is missing from model.compile.
B) The optimizer is incompatible with the accuracy metric.
C) The model output shape does not match the shape of the labels provided.
D) The batch size is too large for the model.
💡 Hint
Check the shapes of model outputs and labels carefully.
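The "(None, 1) and (None, 10)" shapes in the error suggest integer labels fed to a 10-unit output layer. A numpy sketch of the usual label-side fix, assuming a 10-class softmax model and made-up labels:

```python
import numpy as np

num_classes = 10

# Integer labels of shape (N, 1) clash with a 10-unit softmax output of
# shape (N, 10) when the loss/metric expects matching one-hot vectors.
labels = np.array([[3], [7], [1]])  # shape (3, 1)

# Fix 1: one-hot encode the labels so they match the model output shape.
one_hot = np.eye(num_classes)[labels.ravel()]
print(one_hot.shape)  # (3, 10)

# Fix 2 (no reshaping needed): keep the integer labels and compile with
# loss='sparse_categorical_crossentropy', which accepts class indices.
```

Either change makes the label shape consistent with the model's output, which is what the error is complaining about.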