
How to Use Classification Report in sklearn with Python

Use classification_report from sklearn.metrics to get precision, recall, f1-score, and support for each class in your classification model. Pass true labels and predicted labels as arguments to generate a detailed text summary of your model's performance.
📐 Syntax

The classification_report function has this basic syntax:

```python
from sklearn.metrics import classification_report

classification_report(y_true, y_pred, target_names=None, output_dict=False)
```

  • y_true: The true labels from your dataset.
  • y_pred: The labels predicted by your model.
  • target_names (optional): List of class names to display instead of numeric labels.
  • output_dict (optional): If True, returns results as a dictionary instead of a string.
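To see how the dictionary form is used in practice, here is a minimal sketch with output_dict=True. The labels below are illustrative, not from a real model:

```python
from sklearn.metrics import classification_report

# Illustrative labels, not from a trained model
y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]

# output_dict=True returns nested dicts keyed by class label (as a string)
# and by the aggregate rows "macro avg" and "weighted avg";
# "accuracy" maps to a plain float
report = classification_report(y_true, y_pred, output_dict=True)

print(report["1"]["precision"])         # precision for class 1
print(report["macro avg"]["f1-score"])  # macro-averaged F1
```

Note that the per-class keys are strings ("0", "1", …) even when the labels themselves are integers.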
💻 Example

This example shows how to use classification_report with a simple classification task using sklearn's train_test_split and LogisticRegression. It prints the report showing precision, recall, f1-score, and support for each class.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Load data
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, random_state=42)

# Train model
model = LogisticRegression(max_iter=200)
model.fit(X_train, y_train)

# Predict
y_pred = model.predict(X_test)

# Generate classification report
report = classification_report(y_test, y_pred, target_names=iris.target_names)
print(report)
```

Output:

```
              precision    recall  f1-score   support

      setosa       1.00      1.00      1.00        13
  versicolor       0.92      0.92      0.92        13
   virginica       0.92      0.92      0.92        13

    accuracy                           0.95        39
   macro avg       0.95      0.95      0.95        39
weighted avg       0.95      0.95      0.95        39
```
⚠️ Common Pitfalls

Common mistakes when using classification_report:

  • Passing predicted probabilities instead of predicted class labels to y_pred.
  • Not matching the order or number of target_names to the classes in y_true and y_pred.
  • Ignoring multi-class vs binary differences: macro avg treats every class equally, while weighted avg weights each class by its support.
  • Not using output_dict=True when you want to programmatically access metrics.

Always ensure y_pred contains class labels, not probabilities.

```python
from sklearn.metrics import classification_report

# Wrong: passing probabilities instead of labels
# y_pred_prob = model.predict_proba(X_test)
# print(classification_report(y_test, y_pred_prob))  # raises ValueError: continuous/multi-output input

# Right: pass predicted class labels
print(classification_report(y_test, y_pred))
```
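If you already have probabilities, the usual fix is to convert them back to class labels, either by calling model.predict or with np.argmax over the class axis. A sketch, reusing the iris setup from the example above:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

X_train, X_test, y_train, y_test = train_test_split(
    *load_iris(return_X_y=True), random_state=42
)
model = LogisticRegression(max_iter=200).fit(X_train, y_train)

# Each row of predict_proba holds one probability per class;
# argmax over the class axis recovers the predicted label index
proba = model.predict_proba(X_test)
y_pred = np.argmax(proba, axis=1)

print(classification_report(y_test, y_pred))
```

One caveat: argmax returns a column index. For iris the classes are already 0, 1, 2, so it matches model.predict; if your labels are not 0..n-1, map the index through model.classes_ first.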
📊 Quick Reference

Classification Report Metrics Explained:

  • Precision: How many selected items are relevant (true positives / predicted positives).
  • Recall: How many relevant items are selected (true positives / actual positives).
  • F1-score: Harmonic mean of precision and recall, balances both.
  • Support: Number of true instances for each class.
  • Accuracy: Overall correct predictions / total predictions.
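To make these definitions concrete, here is a small hand computation on illustrative binary labels, checked against sklearn's own metric functions:

```python
from sklearn.metrics import precision_score, recall_score, f1_score

# Illustrative binary labels
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives: 3
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives: 1
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives: 1

precision = tp / (tp + fp)  # fraction of predicted positives that are correct
recall = tp / (tp + fn)     # fraction of actual positives that were found
f1 = 2 * precision * recall / (precision + recall)

print(precision, recall, f1)  # 0.75 0.75 0.75

# The hand-computed values match sklearn's metrics
assert abs(precision - precision_score(y_true, y_pred)) < 1e-9
assert abs(recall - recall_score(y_true, y_pred)) < 1e-9
assert abs(f1 - f1_score(y_true, y_pred)) < 1e-9
```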

Key Takeaways

  • Use sklearn.metrics.classification_report to get detailed classification metrics easily.
  • Pass true labels and predicted class labels, not probabilities, to the function.
  • Use target_names to show readable class names in the report.
  • Set output_dict=True to get metrics as a dictionary for further processing.
  • Check precision, recall, f1-score, and support to understand model performance per class.