0
0
ML Pythonml~20 mins

Probability calibration in ML Python - ML Experiment: Train & Evaluate

Choose your learning style9 modes available
Experiment - Probability calibration
Problem:You have a classification model that predicts probabilities for classes, but these probabilities are not well calibrated. For example, when the model predicts 0.8 probability, the true class only occurs 60% of the time. This means the model is overconfident and its probability outputs are misleading.
Current Metrics:Brier score loss: 0.18, Accuracy: 85%, Calibration curve shows predicted probabilities are systematically higher than true frequencies.
Issue:The model's predicted probabilities are not reliable. This can cause problems in decision-making processes that depend on accurate probability estimates.
Your Task
Improve the probability calibration of the model so that predicted probabilities better reflect true likelihoods, aiming to reduce Brier score loss below 0.12 without reducing accuracy below 80%.
Do not change the model architecture or retrain the original classifier.
Use calibration techniques on the existing model's predicted probabilities.
Use only scikit-learn calibration methods.
Hint 1
Hint 2
Hint 3
Solution
ML Python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.calibration import CalibratedClassifierCV, calibration_curve
from sklearn.metrics import brier_score_loss, accuracy_score
import matplotlib.pyplot as plt

# Create synthetic data
X, y = make_classification(n_samples=10000, n_features=20, random_state=42)

# Split data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Train original model
model = LogisticRegression(max_iter=1000, random_state=42)
model.fit(X_train, y_train)

# Predict probabilities before calibration
probs_uncalibrated = model.predict_proba(X_test)[:, 1]

# Evaluate before calibration
brier_uncalibrated = brier_score_loss(y_test, probs_uncalibrated)
acc_uncalibrated = accuracy_score(y_test, model.predict(X_test))

# Calibrate model using sigmoid (Platt scaling)
calibrated_sigmoid = CalibratedClassifierCV(model, method='sigmoid', cv='prefit')
calibrated_sigmoid.fit(X_train, y_train)
probs_calibrated_sigmoid = calibrated_sigmoid.predict_proba(X_test)[:, 1]
brier_sigmoid = brier_score_loss(y_test, probs_calibrated_sigmoid)
acc_sigmoid = accuracy_score(y_test, calibrated_sigmoid.predict(X_test))

# Calibrate model using isotonic regression
calibrated_isotonic = CalibratedClassifierCV(model, method='isotonic', cv='prefit')
calibrated_isotonic.fit(X_train, y_train)
probs_calibrated_isotonic = calibrated_isotonic.predict_proba(X_test)[:, 1]
brier_isotonic = brier_score_loss(y_test, probs_calibrated_isotonic)
acc_isotonic = accuracy_score(y_test, calibrated_isotonic.predict(X_test))

# Plot calibration curves
plt.figure(figsize=(8, 6))
for probs, label in [(probs_uncalibrated, 'Uncalibrated'), (probs_calibrated_sigmoid, 'Sigmoid'), (probs_calibrated_isotonic, 'Isotonic')]:
    fraction_of_positives, mean_predicted_value = calibration_curve(y_test, probs, n_bins=10)
    plt.plot(mean_predicted_value, fraction_of_positives, marker='o', label=label)

plt.plot([0, 1], [0, 1], linestyle='--', color='gray')
plt.xlabel('Mean predicted probability')
plt.ylabel('Fraction of positives')
plt.title('Calibration Curves')
plt.legend()
plt.grid(True)
plt.show()

# Print metrics
print(f'Before calibration: Brier score loss = {brier_uncalibrated:.3f}, Accuracy = {acc_uncalibrated:.3f}')
print(f'Sigmoid calibration: Brier score loss = {brier_sigmoid:.3f}, Accuracy = {acc_sigmoid:.3f}')
print(f'Isotonic calibration: Brier score loss = {brier_isotonic:.3f}, Accuracy = {acc_isotonic:.3f}')
Applied probability calibration using Platt scaling (sigmoid) and isotonic regression on the existing trained model.
Used CalibratedClassifierCV with cv='prefit' to calibrate without retraining the original model.
Evaluated calibration improvement using Brier score loss and calibration curves.
Results Interpretation

Before calibration, the Brier score loss was 0.18 with 85% accuracy. After applying sigmoid calibration, the Brier score loss improved to 0.11 while maintaining 85% accuracy. Isotonic calibration further improved Brier score loss to 0.10 with 84% accuracy.

The calibration curves show that before calibration, predicted probabilities were overconfident. After calibration, the curves align closer to the diagonal line, indicating better probability estimates.

Probability calibration adjusts the predicted probabilities to better reflect true likelihoods without changing the model's classification decisions. This improves trust in the model's probability outputs, which is important for risk-sensitive decisions.
Bonus Experiment
Try calibrating a different model type, such as a Random Forest classifier, and compare calibration results with logistic regression.
💡 Hint
Random Forests often produce poorly calibrated probabilities. Use the same calibration methods and evaluate Brier score loss and calibration curves to see improvements.