TensorFlow · ~20 mins

Why thorough evaluation ensures reliability in TensorFlow - Challenge Your Understanding

Challenge - 5 Problems
🧠 Conceptual · intermediate
Why is it important to evaluate a model on unseen data?

Imagine you trained a model to recognize cats and dogs. Why should you test it on new pictures it hasn't seen before?

A. To increase the number of training examples
B. To make the training process faster
C. To reduce the size of the model
D. To check whether the model learned general patterns or ones that only work on the training data
💡 Hint

Think about whether the model can guess right on new, different pictures.
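A minimal sketch (plain NumPy, no TensorFlow) of why held-out data matters: a hypothetical "model" that simply memorizes its training set scores perfectly on pictures it has seen and only at chance on pictures it has not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 40 training and 40 unseen test samples, 2 balanced classes.
x_train = rng.random((40, 5))
y_train = np.array([0, 1] * 20)
x_test = rng.random((40, 5))
y_test = np.array([0, 1] * 20)

class Memorizer:
    """A 'model' that just memorizes its training examples."""
    def fit(self, x, y):
        self.lookup = {tuple(row): label for row, label in zip(x, y)}

    def predict(self, x):
        # Return the memorized label if seen before, otherwise guess class 0.
        return np.array([self.lookup.get(tuple(row), 0) for row in x])

model = Memorizer()
model.fit(x_train, y_train)

train_acc = (model.predict(x_train) == y_train).mean()
test_acc = (model.predict(x_test) == y_test).mean()
print(f"train accuracy: {train_acc:.2f}")  # 1.00 -- looks perfect
print(f"test accuracy:  {test_acc:.2f}")   # 0.50 -- no better than chance
```

Only the test-set score reveals that nothing general was learned, which is exactly what option D describes.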

Metrics · intermediate
Which metric best shows model reliability on imbalanced data?

You have a model that detects rare diseases. Most people are healthy, so the data is imbalanced. Which metric helps you understand if the model is reliable?

A. Accuracy
B. Precision
C. F1 Score
D. Recall
💡 Hint

Think about a metric that balances both false alarms and missed cases.
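To see why accuracy misleads here, consider a toy screening set (the counts below are illustrative): a lazy model that calls everyone "healthy" gets high accuracy but a zero F1 score, because F1 combines precision (few false alarms) and recall (few missed cases).

```python
import numpy as np

# 100 patients: only 5 actually have the rare disease (label 1).
y_true = np.array([1] * 5 + [0] * 95)
# A lazy model that predicts "healthy" (0) for everyone.
y_pred = np.zeros(100, dtype=int)

accuracy = (y_pred == y_true).mean()             # fraction of correct labels
tp = int(((y_pred == 1) & (y_true == 1)).sum())  # true positives: 0
fp = int(((y_pred == 1) & (y_true == 0)).sum())  # false positives: 0
fn = int(((y_pred == 0) & (y_true == 1)).sum())  # missed cases: 5

precision = tp / (tp + fp) if (tp + fp) else 0.0
recall = tp / (tp + fn) if (tp + fn) else 0.0
f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

print(f"accuracy: {accuracy:.2f}")  # 0.95 -- looks great
print(f"f1 score: {f1:.2f}")        # 0.00 -- exposes the useless model
```

The 95% accuracy comes entirely from the class imbalance; F1 collapses to zero because every sick patient was missed.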

Predict Output · advanced
What is this TensorFlow evaluation code most likely to print?

Consider this TensorFlow code, which trains a tiny model for a single epoch on random data, then evaluates it and prints the accuracy. Which output is most likely?

TensorFlow
import tensorflow as tf
import numpy as np

# Dummy test data
x_test = np.random.random((10, 5))
y_test = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1, 0])

# Simple model
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation='sigmoid', input_shape=(5,))
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Fake training to set weights
model.fit(x_test, y_test, epochs=1, verbose=0)

# Evaluate
loss, accuracy = model.evaluate(x_test, y_test, verbose=0)
print(f"Accuracy: {accuracy:.2f}")
A. Accuracy: 0.50
B. Accuracy: 1.00
C. Accuracy: 0.70
D. Accuracy: 0.00
💡 Hint

The model is very simple and trained only briefly on small data.
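For intuition, here is what Keras' binary `accuracy` metric computes for a sigmoid output: threshold the predicted probabilities at 0.5, then take the fraction that matches the labels. The probabilities below are made up for illustration.

```python
import numpy as np

# Labels from the snippet above and some illustrative sigmoid outputs.
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1, 0])
y_prob = np.array([0.61, 0.48, 0.52, 0.55, 0.70, 0.30, 0.45, 0.20, 0.90, 0.51])

# Binary accuracy: threshold at 0.5, then average the matches.
y_pred = (y_prob > 0.5).astype(int)
accuracy = (y_pred == y_true).mean()
print(f"Accuracy: {accuracy:.2f}")  # Accuracy: 0.70
```

Since the snippet's features are random and training lasts one epoch, the thresholded predictions are close to coin flips, which is why an accuracy near 0.50 is the most likely outcome.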

Hyperparameter · advanced
How does increasing validation frequency affect model evaluation?

During training, you set the model to validate more often on the validation set. What is the main effect of this?

A. It provides more frequent feedback on model performance but slows training
B. It increases the training speed by skipping some batches
C. It reduces the model size by pruning layers
D. It guarantees the model will not overfit
💡 Hint

Think about what happens when you check the model's accuracy more often during training.
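Keras exposes this through the `validation_freq` argument of `fit()`. The trade-off can be sketched in plain Python with a hypothetical `train()` helper that only counts validation passes: checking more often means more feedback, but each check is an extra forward pass over the validation set.

```python
def train(epochs, validation_freq):
    """Count how many validation passes a training run performs."""
    validations = 0
    for epoch in range(1, epochs + 1):
        # ... one epoch of training would happen here ...
        if epoch % validation_freq == 0:
            validations += 1  # extra forward pass over the validation set
    return validations

print(train(epochs=20, validation_freq=1))  # 20 validation passes
print(train(epochs=20, validation_freq=5))  # 4 validation passes
```

Validating every epoch gives five times as many checkpoints of model quality as validating every fifth epoch, at the cost of five times as much evaluation work, which is the trade-off option A describes.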

🔧 Debug · expert
What happens when this TensorFlow evaluation code runs?

Look at this code snippet, which evaluates a compiled model that was never trained. What happens when it runs?

TensorFlow
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation='sigmoid', input_shape=(3,))
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Missing training step
x_test = tf.random.normal((5, 3))
y_test = tf.constant([1, 0, 1, 0, 1])

loss, accuracy = model.evaluate(x_test, y_test, verbose=0)
print(f"Accuracy: {accuracy:.2f}")
A. RuntimeError because the input shape does not match
B. No error; it prints whatever accuracy the randomly initialized weights happen to produce
C. TypeError due to wrong data types in x_test
D. ValueError because the model is not trained before evaluation
💡 Hint

Does evaluate() actually require a trained model, or only a compiled one? Think about what the randomly initialized weights would predict.
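For intuition, here is a plain-NumPy sketch of what "evaluating an untrained model" amounts to: scoring predictions made with randomly initialized weights, mirroring the single `Dense(1, activation='sigmoid')` layer above. Nothing errors; the score is simply uninformative.

```python
import numpy as np

rng = np.random.default_rng(42)

# "Untrained" model: a random weight vector feeding a sigmoid,
# like a freshly initialized Dense(1, activation='sigmoid') on 3 features.
w = rng.normal(size=3)
b = 0.0

x_test = rng.normal(size=(5, 3))
y_test = np.array([1, 0, 1, 0, 1])

probs = 1.0 / (1.0 + np.exp(-(x_test @ w + b)))  # sigmoid outputs
preds = (probs > 0.5).astype(int)
accuracy = (preds == y_test).mean()

# No exception is raised; the number just reflects random weights.
print(f"Accuracy: {accuracy:.2f}")
```

Evaluation only needs a compiled model with defined weights; training changes how good those weights are, not whether `evaluate()` can run.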