
ROC and AUC curves in TensorFlow

Introduction

The ROC curve and AUC measure how well a binary classifier separates two classes, such as positive vs. negative. They summarize the quality of the model's decisions across every possible classification threshold.

They are useful in situations such as:

When checking how well a model predicts whether an email is spam or not.
When deciding whether a medical test correctly detects a disease.
When comparing different models to pick the best one for classification.
When you want to understand the trade-off between catching true positives and avoiding false alarms.
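The trade-off in the last point can be seen by fixing a single threshold by hand. This is a minimal sketch with made-up labels and scores: the true positive rate counts the positives we catch, and the false positive rate counts the false alarms we raise.

```python
import numpy as np

# Hypothetical true labels and model scores, assumed for illustration
true_labels = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])

threshold = 0.5
predictions = (scores >= threshold).astype(int)  # [0, 0, 0, 1]

tp = np.sum((predictions == 1) & (true_labels == 1))  # positives caught
fp = np.sum((predictions == 1) & (true_labels == 0))  # false alarms
fn = np.sum((predictions == 0) & (true_labels == 1))  # positives missed
tn = np.sum((predictions == 0) & (true_labels == 0))  # negatives correctly rejected

tpr = tp / (tp + fn)  # true positive rate at this threshold
fpr = fp / (fp + tn)  # false positive rate at this threshold
print(tpr, fpr)       # 0.5 0.0
```

Sweeping the threshold from high to low and recording (fpr, tpr) at each step traces out the ROC curve.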
Syntax
TensorFlow
from sklearn.metrics import roc_curve, auc

fpr, tpr, thresholds = roc_curve(true_labels, predicted_scores)
roc_auc = auc(fpr, tpr)

roc_curve returns false positive rate (fpr), true positive rate (tpr), and thresholds.

auc calculates the area under the ROC curve, a single number to summarize performance.

Examples
Calculate ROC and AUC for a small example with true labels and predicted scores.
TensorFlow
from sklearn.metrics import roc_curve, auc

fpr, tpr, thresholds = roc_curve([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
roc_auc = auc(fpr, tpr)
Use TensorFlow's built-in AUC metric to compute AUC during or after training.
TensorFlow
import tensorflow as tf

true_labels = [0, 0, 1, 1]
predicted_scores = [0.1, 0.4, 0.35, 0.8]

auc_metric = tf.keras.metrics.AUC()
auc_metric.update_state(true_labels, predicted_scores)
result = auc_metric.result().numpy()
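The same metric can also be tracked during training by passing it to model.compile. The tiny model and random data below are placeholders, a minimal sketch rather than a real training setup; Keras then reports AUC for each epoch under the metric's name.

```python
import numpy as np
import tensorflow as tf

# Toy data; in practice use your real training set
X = np.random.rand(32, 4).astype("float32")
y = np.random.randint(0, 2, size=(32, 1))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Passing tf.keras.metrics.AUC() makes Keras compute AUC every epoch
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])

history = model.fit(X, y, epochs=2, verbose=0)
print(history.history["auc"])  # one AUC value per epoch
```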
Sample Model

This program shows how to calculate the ROC curve and AUC using both sklearn and TensorFlow. It prints the false positive rates, true positive rates, thresholds, and AUC values.

TensorFlow
import numpy as np
import tensorflow as tf
from sklearn.metrics import roc_curve, auc

# Create sample true labels and predicted probabilities
true_labels = np.array([0, 0, 1, 1])
predicted_scores = np.array([0.1, 0.4, 0.35, 0.8])

# Calculate ROC curve using sklearn
fpr, tpr, thresholds = roc_curve(true_labels, predicted_scores)
roc_auc = auc(fpr, tpr)

print(f"False Positive Rates: {fpr}")
print(f"True Positive Rates: {tpr}")
print(f"Thresholds: {thresholds}")
print(f"AUC: {roc_auc:.2f}")

# Calculate AUC using TensorFlow metric
auc_metric = tf.keras.metrics.AUC()
auc_metric.update_state(true_labels, predicted_scores)
tf_auc = auc_metric.result().numpy()
print(f"TensorFlow AUC: {tf_auc:.2f}")
Important Notes

The ROC curve plots the true positive rate against the false positive rate at different threshold settings.

AUC ranges from 0 to 1; 0.5 corresponds to random guessing, and values closer to 1 mean better separation of the classes.

Use predicted probabilities or scores, not hard labels, to compute ROC and AUC.
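The last note matters because thresholding the scores first throws away the ranking information that AUC is built on. A small sketch with hypothetical labels and scores, chosen so the effect is visible:

```python
from sklearn.metrics import roc_auc_score

# Hypothetical labels and scores, assumed for illustration
true_labels = [0, 1, 0, 1]
scores = [0.3, 0.4, 0.6, 0.9]

# AUC from the raw scores uses the full ranking
auc_from_scores = roc_auc_score(true_labels, scores)

# Thresholding at 0.5 first collapses the scores to [0, 0, 1, 1]
hard_labels = [1 if s >= 0.5 else 0 for s in scores]
auc_from_hard = roc_auc_score(true_labels, hard_labels)

print(auc_from_scores, auc_from_hard)  # 0.75 0.5
```

Here the hard labels make the model look no better than random, even though the raw scores still rank one positive above both negatives.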

Summary

The ROC curve helps visualize a model's ability to distinguish between classes.

AUC gives a single number to summarize ROC curve performance.

Both sklearn and TensorFlow provide easy ways to compute these metrics.