TensorFlow · ~20 mins

Classification reports in TensorFlow - Practice Problems & Coding Challenges

Challenge - 5 Problems
🎖️
Classification Report Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
Predict Output
intermediate
2:00 remaining
Output of classification report with imbalanced classes
Given a binary classification problem with the true and predicted labels below, what is the precision for class 1 in the classification report?
Python
from sklearn.metrics import classification_report

y_true = [0, 0, 0, 1, 1, 1, 1]
y_pred = [0, 0, 1, 1, 0, 1, 1]

report = classification_report(y_true, y_pred, output_dict=True)
precision_class_1 = report['1']['precision']
print(round(precision_class_1, 2))
A. 0.60
B. 0.67
C. 0.80
D. 0.75
Attempts: 2 left
💡 Hint
Precision is the ratio of true positives to all predicted positives for that class.
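To make the hint concrete, here is a minimal sketch of computing class-1 precision by hand and checking it against sklearn. The labels are made up and deliberately different from the challenge data, so nothing is given away:

```python
from sklearn.metrics import precision_score

# Illustrative labels (not the challenge data)
y_true = [0, 1, 1, 0, 1]
y_pred = [1, 1, 0, 0, 1]

# True positives: samples predicted 1 that really are 1
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
# All predicted positives: every sample predicted 1
predicted_pos = sum(1 for p in y_pred if p == 1)

manual_precision = tp / predicted_pos  # 2 true positives / 3 predicted positives
print(round(manual_precision, 2))                 # 0.67
print(round(precision_score(y_true, y_pred), 2))  # 0.67 -- sklearn agrees
```

Counting predicted positives (the denominator) rather than true positives (the denominator of recall) is the step that trips people up.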
Model Choice
intermediate
1:30 remaining
Choosing the right metric for multi-class classification
You have a multi-class classification problem with 5 classes and want to evaluate your TensorFlow model. Which metric from the classification report best shows the balance between precision and recall for each class?
A. Accuracy
B. F1-score
C. Precision
D. Recall
Attempts: 2 left
💡 Hint
Think about a metric that combines precision and recall.
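For intuition, a small sketch of the relevant formula, the harmonic mean of precision and recall (the numbers here are made up):

```python
def f1(precision, recall):
    # Harmonic mean: high only when precision AND recall are both high.
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.9, 0.9), 2))  # 0.9  -- balanced, stays high
print(round(f1(0.9, 0.1), 2))  # 0.18 -- one low value drags it down
```

Unlike an arithmetic mean (which would give 0.5 for the second case), the harmonic mean collapses whenever either component collapses, which is why it captures the precision/recall balance per class.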
Hyperparameter
advanced
2:00 remaining
Effect of threshold on classification report metrics
In a binary classification TensorFlow model, you adjust the decision threshold from 0.5 to 0.7. How does this change most likely affect the precision and recall for the positive class in the classification report?
A. Precision increases, recall decreases
B. Precision decreases, recall increases
C. Both precision and recall increase
D. Both precision and recall decrease
Attempts: 2 left
💡 Hint
Raising the threshold makes the model more strict about positive predictions.
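A hand-rolled sketch of the typical effect, using hypothetical predicted probabilities (not tied to any particular model):

```python
# Hypothetical predicted probabilities for the positive class
probs  = [0.95, 0.80, 0.60, 0.55, 0.40]
y_true = [1,    1,    1,    0,    1]

def precision_recall(threshold):
    y_pred = [1 if p >= threshold else 0 for p in probs]
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fp), tp / (tp + fn)

print(precision_recall(0.5))  # (0.75, 0.75)
print(precision_recall(0.7))  # (1.0, 0.5) -- fewer, but cleaner, positives
```

At 0.7 the model drops a false positive (cleaner predictions, so precision rises) but also drops a true positive (more misses, so recall falls).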
🔧 Debug
advanced
2:00 remaining
Debugging unexpected classification report output
You run classification_report on your TensorFlow model predictions but see all zeros for precision, recall, and f1-score for one class. What is the most likely cause?
Python
from sklearn.metrics import classification_report

true_labels = [0, 1, 2, 2, 1]
pred_labels = [0, 0, 2, 2, 0]

print(classification_report(true_labels, pred_labels))
A. The model never predicted that class, causing zero division in metrics
B. The true labels are missing that class, so metrics are zero
C. The classification_report function is not compatible with TensorFlow
D. The labels are not integers, causing metric calculation errors
Attempts: 2 left
💡 Hint
Check if the model predicted any samples for that class.
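A runnable sketch of the failure mode: with the labels from the snippet above, class 1 never appears in pred_labels, so its precision is 0/0. sklearn reports it as 0 (and emits an UndefinedMetricWarning unless zero_division is set explicitly):

```python
from sklearn.metrics import classification_report

true_labels = [0, 1, 2, 2, 1]
pred_labels = [0, 0, 2, 2, 0]  # class 1 is never predicted

report = classification_report(true_labels, pred_labels,
                               output_dict=True, zero_division=0)
# No predicted positives for class 1 -> precision is 0/0, reported as 0;
# no true class-1 sample was recovered -> recall (and hence F1) is 0 too.
print(report['1'])
```

Note that a class absent from the true labels but present in predictions would instead zero out recall's denominator; here it is the predictions that miss class 1 entirely.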
🧠 Conceptual
expert
2:30 remaining
Interpreting macro vs weighted averages in classification reports
In a multi-class classification report, what is the key difference between macro average and weighted average for precision, recall, and F1-score?
A. Both macro and weighted averages weight metrics by class frequency
B. Macro average weights metrics by class support; weighted average averages equally
C. Macro average calculates metrics per class and averages equally; weighted average weights by class support
D. Macro average only considers the largest class; weighted average considers all classes
Attempts: 2 left
💡 Hint
Think about how class size affects the averaging.
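A sketch with made-up, imbalanced labels that reproduces both averages by hand and checks them against the report:

```python
from sklearn.metrics import classification_report

y_true = [0, 0, 0, 0, 1, 1]   # class 0: support 4, class 1: support 2
y_pred = [0, 0, 0, 1, 1, 0]

report = classification_report(y_true, y_pred, output_dict=True)
p0 = report['0']['precision']
p1 = report['1']['precision']

macro    = (p0 + p1) / 2           # every class counts equally
weighted = (4 * p0 + 2 * p1) / 6   # classes weighted by their support

assert abs(macro - report['macro avg']['precision']) < 1e-9
assert abs(weighted - report['weighted avg']['precision']) < 1e-9
print(macro, weighted)  # differ whenever class sizes differ
```

With imbalanced data the weighted average is pulled toward the majority class's score, while the macro average exposes poor performance on rare classes equally.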