
Confusion matrix in ML Python

Introduction

A confusion matrix helps us see how well a classification model is doing by counting where it gets things right and where it makes mistakes. Typical uses include:

Checking how well a model classifies emails as spam or not spam.
Evaluating a model that predicts if a patient has a disease or not.
Understanding errors in a model that recognizes handwritten digits.
Comparing different models to pick the best one for a classification task.
Syntax
ML Python
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(true_labels, predicted_labels)

true_labels holds the actual (ground-truth) classes.

predicted_labels holds the classes the model predicted.

Examples
This example shows a simple confusion matrix for two classes: 0 and 1.
ML Python
from sklearn.metrics import confusion_matrix

true_labels = [0, 1, 0, 1]
predicted_labels = [0, 0, 0, 1]
cm = confusion_matrix(true_labels, predicted_labels)
print(cm)
Here, we specify the row and column order explicitly with the labels parameter for clarity.
ML Python
from sklearn.metrics import confusion_matrix

true_labels = ['cat', 'dog', 'cat', 'dog']
predicted_labels = ['dog', 'dog', 'cat', 'cat']
cm = confusion_matrix(true_labels, predicted_labels, labels=['cat', 'dog'])
print(cm)
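Without labels, scikit-learn orders the classes by sorting them; passing labels fixes the row and column order yourself. As a quick sketch of how that changes the matrix's orientation (the data here is made up for illustration):

```python
from sklearn.metrics import confusion_matrix

# Hypothetical labels chosen so the matrix is not symmetric
true_labels = ['cat', 'cat', 'cat', 'dog']
predicted_labels = ['cat', 'dog', 'cat', 'dog']

# With labels=['cat', 'dog'], row 0 and column 0 correspond to 'cat'
cm = confusion_matrix(true_labels, predicted_labels, labels=['cat', 'dog'])

# Reversing the order flips both the rows and the columns
cm_rev = confusion_matrix(true_labels, predicted_labels, labels=['dog', 'cat'])

print(cm)      # [[2 1]
               #  [0 1]]
print(cm_rev)  # [[1 0]
               #  [1 2]]
```

Both matrices describe the same predictions; only the ordering of classes along the rows and columns differs.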
Sample Program

This program compares true labels and predicted labels for a binary classification. It prints the confusion matrix showing counts of correct and incorrect predictions.

ML Python
from sklearn.metrics import confusion_matrix

# Actual labels
true_labels = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]

# Model predictions
predicted_labels = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

# Calculate confusion matrix
cm = confusion_matrix(true_labels, predicted_labels)

print("Confusion Matrix:")
print(cm)
Output
Confusion Matrix:
[[4 1]
 [1 4]]
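For binary classification, the four cells of the matrix can be unpacked with ravel(), which returns them in the order true negatives, false positives, false negatives, true positives. A minimal sketch using the same data as the sample program:

```python
from sklearn.metrics import confusion_matrix

true_labels = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
predicted_labels = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

# ravel() flattens the 2x2 matrix row by row:
# TN, FP (actual 0 row), then FN, TP (actual 1 row)
tn, fp, fn, tp = confusion_matrix(true_labels, predicted_labels).ravel()

print(tn, fp, fn, tp)  # 4 1 1 4
```

Having the four counts as plain numbers makes it easy to compute metrics such as precision (tp / (tp + fp)) and recall (tp / (tp + fn)) by hand.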
Important Notes

The confusion matrix rows represent actual classes, and columns represent predicted classes.

Diagonal values show correct predictions; off-diagonal values show mistakes.
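Because correct predictions sit on the diagonal, accuracy can be read straight off the matrix: the diagonal sum divided by the total count. A sketch using the sample program's data:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

true_labels = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
predicted_labels = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]
cm = confusion_matrix(true_labels, predicted_labels)

# Diagonal = correct predictions; total = all predictions
accuracy = np.trace(cm) / cm.sum()

print(accuracy)  # 0.8
```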

It works for any number of classes, not just two.
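To see this with more than two classes, here is a sketch with three classes (the labels are made up for illustration); the result is a 3x3 matrix, one row and column per class:

```python
from sklearn.metrics import confusion_matrix

# Hypothetical three-class data: classes 0, 1, and 2
true_labels = [0, 1, 2, 2, 1, 0]
predicted_labels = [0, 2, 2, 2, 1, 0]

cm = confusion_matrix(true_labels, predicted_labels)

print(cm)
# [[2 0 0]     both 0s predicted correctly
#  [0 1 1]     one 1 mistaken for a 2
#  [0 0 2]]    both 2s predicted correctly
```

The reading is the same as in the binary case: row i, column j counts how many samples of actual class i were predicted as class j.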

Summary

A confusion matrix shows how many times a model guessed each class correctly or incorrectly.

It helps understand model errors clearly.

Use it to improve and compare classification models.