TensorFlow · ML · ~20 mins

Classification reports in TensorFlow - ML Experiment: Train & Evaluate

Experiment - Classification reports
Problem: You have trained a neural network to classify images into 3 categories. The model shows good accuracy, but you want to understand how well it performs on each class.
Current Metrics: Overall accuracy: 85%; no detailed class-wise metrics available.
Issue: The model's overall accuracy is known, but you lack per-class precision, recall, and F1-score, so you cannot properly evaluate performance on each category.
Your Task
Generate and interpret a classification report showing precision, recall, and F1-score for each class using TensorFlow and sklearn.
Use TensorFlow for model prediction.
Use sklearn's classification_report function for metrics.
Do not retrain the model; use existing predictions.
Solution
TensorFlow
import numpy as np
from sklearn.metrics import classification_report
import tensorflow as tf

# Simulate loading test data and true labels
num_samples = 100
num_classes = 3
np.random.seed(42)
X_test = np.random.random((num_samples, 20))  # example test features
true_labels = np.random.randint(0, num_classes, size=num_samples)

# Simulate a trained model (in practice, load your trained model instead)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dense(num_classes, activation='softmax')
])

# Normally the model would already be trained; here we just compile and predict
# for demonstration, so the predicted labels are effectively random
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# Predict probabilities
pred_probs = model.predict(X_test)

# Convert probabilities to predicted class labels
pred_labels = np.argmax(pred_probs, axis=1)

# Generate classification report
report = classification_report(true_labels, pred_labels, target_names=[f'Class {i}' for i in range(num_classes)])
print(report)
Added code to predict class probabilities using the TensorFlow model.
Converted predicted probabilities to class labels using argmax.
Used sklearn's classification_report to compute precision, recall, and F1-score per class.
Printed the classification report for detailed evaluation.
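If you want to use the metrics programmatically (for logging or thresholding) rather than just printing them, classification_report can return a nested dictionary via output_dict=True. A minimal sketch with hypothetical labels:

```python
import numpy as np
from sklearn.metrics import classification_report

# Hypothetical labels for illustration only
true_labels = np.array([0, 0, 1, 1, 2, 2])
pred_labels = np.array([0, 1, 1, 1, 2, 0])

# output_dict=True returns a nested dict instead of a formatted string
report = classification_report(true_labels, pred_labels,
                               output_dict=True, zero_division=0)

# Access per-class metrics by the (string) class label
print(report['1']['precision'])  # of samples predicted as class 1, fraction correct
print(report['1']['recall'])     # of actual class-1 samples, fraction found
print(report['macro avg']['f1-score'])
```

The dictionary keys are the class labels as strings (or your target_names if provided), plus 'accuracy', 'macro avg', and 'weighted avg'.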
Results Interpretation

Before: Only overall accuracy was known (85%).

After: Detailed classification report shows precision, recall, and F1-score for each class, revealing strengths and weaknesses per category.

Classification reports provide detailed insights beyond overall accuracy, helping to understand model performance on each class and guiding improvements.
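To make the report's numbers concrete, each per-class metric can be derived directly from counts of true/false positives and negatives. A small sketch with toy (hypothetical) labels, computing the metrics for one class by hand:

```python
import numpy as np

# Toy labels, chosen only to illustrate the arithmetic
true_labels = np.array([0, 0, 0, 1, 1, 2])
pred_labels = np.array([0, 1, 0, 1, 1, 2])

cls = 0
tp = np.sum((pred_labels == cls) & (true_labels == cls))  # predicted 0, actually 0
fp = np.sum((pred_labels == cls) & (true_labels != cls))  # predicted 0, actually other
fn = np.sum((pred_labels != cls) & (true_labels == cls))  # actual 0, predicted other

precision = tp / (tp + fp)  # of everything predicted as class 0, how much was right
recall = tp / (tp + fn)     # of all actual class-0 samples, how many were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
print(precision, recall, f1)
```

This matches the corresponding row sklearn's classification_report would print for that class, which is why a class can have high precision but low recall (or vice versa) even when overall accuracy looks fine.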
Bonus Experiment
Try generating a classification report with a confusion matrix heatmap visualization.
💡 Hint
Use sklearn.metrics.confusion_matrix and visualize it with matplotlib's imshow (or seaborn's heatmap) to see where class prediction errors occur.
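One possible sketch of the bonus experiment, using matplotlib's imshow to render the confusion matrix (the labels here are randomly generated stand-ins for your model's predictions):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so this runs in scripts
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix

# Hypothetical labels standing in for true_labels / pred_labels from the solution
np.random.seed(42)
num_classes = 3
true_labels = np.random.randint(0, num_classes, size=100)
pred_labels = np.random.randint(0, num_classes, size=100)

# Rows = true class, columns = predicted class
cm = confusion_matrix(true_labels, pred_labels)

fig, ax = plt.subplots()
im = ax.imshow(cm, cmap='Blues')
# Annotate each cell with its count so the heatmap is readable
for i in range(num_classes):
    for j in range(num_classes):
        ax.text(j, i, cm[i, j], ha='center', va='center')
ax.set_xlabel('Predicted label')
ax.set_ylabel('True label')
ax.set_title('Confusion matrix')
fig.colorbar(im)
fig.savefig('confusion_matrix.png')
```

Off-diagonal cells show misclassifications: a bright cell at row i, column j means class i is frequently confused with class j.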