ML Python · ~20 mins

Probability calibration in ML Python - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual · intermediate · time limit 1:30
What does probability calibration mean in machine learning?
Imagine you have a model that predicts the chance of rain tomorrow. What does it mean if the model is well calibrated?
A) The model has the highest accuracy among all models.
B) The model always predicts rain when it actually rains.
C) The model's predictions are always either 0 or 1.
D) The predicted probabilities match the actual frequencies of rain over many days.
💡 Hint
Think about how predicted chances relate to what really happens over time.
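To make the idea concrete, here is a minimal sketch with made-up rain forecasts: group days by the predicted probability and compare against the observed rain frequency in each group.

```python
import numpy as np

# Hypothetical rain forecasts: predicted probability and whether it rained
y_prob = np.array([0.3, 0.3, 0.3, 0.3, 0.3, 0.7, 0.7, 0.7, 0.7, 0.7])
y_true = np.array([0, 0, 1, 0, 0, 1, 1, 0, 1, 1])

# For a well-calibrated model, days with a 30% forecast should see rain
# roughly 30% of the time, and 70% forecasts roughly 70% of the time.
for p in np.unique(y_prob):
    observed = y_true[y_prob == p].mean()
    print(f"predicted {p:.0%} -> observed {observed:.0%}")

# Output:
# predicted 30% -> observed 20%
# predicted 70% -> observed 80%
```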
Predict Output · intermediate · time limit 2:00
Output of calibration curve code snippet
What output does this code produce?
ML Python
from sklearn.calibration import calibration_curve
import numpy as np

# True labels
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 0])
# Predicted probabilities
y_prob = np.array([0.1, 0.4, 0.35, 0.8, 0.9, 0.2, 0.7, 0.3, 0.85, 0.05])

prob_true, prob_pred = calibration_curve(y_true, y_prob, n_bins=3)
print(np.round(prob_true, 2))
A) [0. 0.5 1. ]
B) [0.33 0.67 0.75]
C) [0.2 0.6 0.8]
D) [0.25 0.67 1. ]
💡 Hint
Check how the predicted probabilities are grouped into bins and how many true positives are in each bin.
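For reference, the binning that `calibration_curve` performs can be reproduced by hand; this sketch uses the same arrays as the snippet and scikit-learn's default uniform binning strategy.

```python
import numpy as np

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 0])
y_prob = np.array([0.1, 0.4, 0.35, 0.8, 0.9, 0.2, 0.7, 0.3, 0.85, 0.05])

# With n_bins=3 and the default uniform strategy, the bin edges are
# [0, 1/3), [1/3, 2/3), [2/3, 1]; prob_true is the mean of y_true per bin.
edges = np.linspace(0, 1, 4)
bin_ids = np.searchsorted(edges[1:-1], y_prob)
for b in range(3):
    mask = bin_ids == b
    print(f"bin {b}: probs {y_prob[mask]} -> "
          f"fraction positive {y_true[mask].mean():.2f}")
```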
Model Choice · advanced · time limit 2:00
Best calibration method for a binary classifier with overconfident predictions
You have a binary classifier whose predicted probabilities are often too close to 0 or 1, making it overconfident. Which calibration method is most suitable to fix this?
A) Isotonic regression (a non-parametric calibration method)
B) Platt scaling (logistic regression on the model's scores)
C) No calibration needed; just retrain the model
D) Use a decision tree classifier instead
💡 Hint
Consider which method can handle non-linear calibration well.
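Both candidate methods are exposed through `CalibratedClassifierCV`'s `method` parameter. A minimal sketch on synthetic data (the base estimator and parameters here are arbitrary choices for illustration):

```python
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, random_state=0)

# Platt scaling ('sigmoid') fits a logistic curve to the model's scores;
# isotonic regression fits a non-parametric, monotonic step function.
for method in ("sigmoid", "isotonic"):
    clf = CalibratedClassifierCV(GaussianNB(), method=method, cv=5)
    clf.fit(X, y)
    print(method, clf.predict_proba(X[:1]).round(3))
```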
Metrics · advanced · time limit 1:30
Which metric directly measures calibration quality?
Among these metrics, which one directly measures how well predicted probabilities match observed outcomes?
A) Accuracy
B) Brier score
C) Precision
D) Recall
💡 Hint
Think about a metric that compares predicted probabilities to actual binary outcomes.
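For reference, the Brier score is the mean squared difference between predicted probabilities and the binary outcomes; a small sketch with made-up values:

```python
import numpy as np
from sklearn.metrics import brier_score_loss

y_true = np.array([0, 1, 1, 0])
y_prob = np.array([0.1, 0.9, 0.8, 0.3])

# Brier score = mean((y_prob - y_true)^2); lower is better, 0 is perfect.
# Here: (0.01 + 0.01 + 0.04 + 0.09) / 4 = 0.0375
score = brier_score_loss(y_true, y_prob)
print(round(score, 4))  # 0.0375
```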
🔧 Debug · expert · time limit 2:30
Does this calibration code raise an error?
Does this code raise an error? If so, which one and why?
ML Python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.linear_model import LogisticRegression
import numpy as np

X = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])
y = np.array([0, 1, 0, 1])

model = LogisticRegression()
# cv=2 so each cross-validation split still contains both classes
calibrated = CalibratedClassifierCV(model, method='sigmoid', cv=2)
calibrated.fit(X, y)

# Predict probabilities
probs = calibrated.predict_proba(X)
A) ValueError: 'base_estimator' should be fitted before calibration
B) NotFittedError: This CalibratedClassifierCV instance is not fitted yet
C) No error; the code runs successfully
D) TypeError: method='sigmoid' is invalid
💡 Hint
Check the standard usage of CalibratedClassifierCV with an unfitted base estimator.
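For comparison, a sketch of the standard pattern on a larger synthetic dataset: the base estimator is passed unfitted, and `CalibratedClassifierCV` fits it internally on cross-validation folds.

```python
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.linear_model import LogisticRegression

# Synthetic binary classification data with enough samples per class
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Standard usage: the base estimator is UNFITTED; calibration fits it per fold
calibrated = CalibratedClassifierCV(LogisticRegression(), method='sigmoid', cv=5)
calibrated.fit(X, y)
probs = calibrated.predict_proba(X)
print(probs.shape)  # (100, 2)
```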