MLOps · How-To · Beginner · 3 min read

How to Calculate AUC Score with sklearn in Python

Use roc_auc_score from sklearn.metrics to calculate the AUC score in Python. Pass the true labels and predicted probabilities to roc_auc_score(y_true, y_score) to get the AUC value.
📐 Syntax

The function roc_auc_score(y_true, y_score) computes the Area Under the Receiver Operating Characteristic Curve (AUC) from prediction scores.

  • y_true: Array of true binary labels (0 or 1).
  • y_score: Array of predicted scores or probabilities for the positive class.

The output is a float between 0 and 1 representing the AUC score.

python
from sklearn.metrics import roc_auc_score

auc = roc_auc_score(y_true, y_score)
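In practice, y_score usually comes from a fitted classifier's predict_proba. A minimal sketch (the dataset and model below are illustrative assumptions, not part of this article):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Illustrative synthetic data and model
X, y = make_classification(n_samples=200, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)

# predict_proba returns one column per class; column 1 holds the
# positive-class probability, which is what roc_auc_score expects
y_score = clf.predict_proba(X_test)[:, 1]
auc = roc_auc_score(y_test, y_score)
print(f"AUC score: {auc:.2f}")
```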
💻 Example

This example shows how to calculate the AUC score for a binary classification problem using sklearn. We create true labels and predicted probabilities, then compute the AUC score.

python
from sklearn.metrics import roc_auc_score

# True binary labels
y_true = [0, 0, 1, 1]

# Predicted probabilities for positive class
y_score = [0.1, 0.4, 0.35, 0.8]

# Calculate AUC score
auc = roc_auc_score(y_true, y_score)
print(f"AUC score: {auc:.2f}")
Output
AUC score: 0.75
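As a sanity check, the same value can be computed by building the ROC curve explicitly with roc_curve and integrating it with auc:

```python
from sklearn.metrics import auc, roc_curve

y_true = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]

# roc_curve returns false/true positive rates at each score threshold
fpr, tpr, thresholds = roc_curve(y_true, y_score)

# auc computes the trapezoidal area under the (fpr, tpr) curve
area = auc(fpr, tpr)
print(f"AUC via roc_curve: {area:.2f}")  # matches roc_auc_score: 0.75
```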
⚠️ Common Pitfalls

Common mistakes when calculating AUC score include:

  • Passing predicted class labels instead of probabilities to roc_auc_score. It expects continuous scores or probabilities, not hard 0/1 predictions.
  • Using true labels that are not binary. For multiclass problems, roc_auc_score needs the multi_class parameter and per-class probabilities.
  • Confusing roc_auc_score with accuracy or other threshold-based metrics.

Always provide the predicted probabilities for the positive class to get a meaningful AUC score.

python
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1]

# Wrong: passing hard class predictions. The call still runs, but the
# AUC is computed from only two distinct "scores" (0 and 1), which
# misrepresents the model's ranking ability.
y_pred = [0, 0, 1, 1]
print(roc_auc_score(y_true, y_pred))   # 1.00 — inflated

# Right: passing predicted probabilities for the positive class
y_score = [0.1, 0.4, 0.35, 0.8]
print(roc_auc_score(y_true, y_score))  # 0.75
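Some models, such as an SVC with default settings, do not expose predict_proba. Their decision_function margins are equally valid inputs, because AUC depends only on the ranking of the scores, not their scale. A sketch under that assumption (the dataset below is illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)
clf = SVC().fit(X, y)  # no predict_proba unless probability=True

# Signed distances to the separating hyperplane work as y_score,
# since roc_auc_score only uses their relative ordering
scores = clf.decision_function(X)
auc = roc_auc_score(y, scores)
print(f"AUC from decision_function: {auc:.2f}")
```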
📊 Quick Reference

Parameter    Description
y_true       Array of true binary labels (0 or 1)
y_score      Array of predicted probabilities or scores for the positive class
Output       AUC score as a float between 0 and 1

Key Takeaways

  • Use roc_auc_score from sklearn.metrics with true labels and predicted probabilities.
  • Pass predicted probabilities, not class labels, to get a correct AUC score.
  • AUC ranges from 0 to 1; 0.5 corresponds to random ranking, and higher values mean better discrimination.
  • Ensure true labels are binary (0 or 1) for the default roc_auc_score call to work properly.