Recall & Review
beginner
What is probability calibration in machine learning?
Probability calibration is the process of adjusting a model's predicted probabilities so they better reflect the true likelihood of an event. For example, if a model predicts a 70% chance of rain, it should actually rain on about 70% of the occasions it makes that prediction.
beginner
Why is probability calibration important?
It helps make predictions more trustworthy and useful. Well-calibrated probabilities allow better decision-making, like knowing when to trust a model's prediction or how to weigh risks.
intermediate
Name two common methods for probability calibration.
Two common methods are Platt Scaling, which fits a logistic regression on the model's scores, and Isotonic Regression, which fits a flexible non-decreasing function to adjust probabilities.
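Both methods are available in scikit-learn through `CalibratedClassifierCV`, where Platt Scaling corresponds to `method="sigmoid"` and Isotonic Regression to `method="isotonic"`. A minimal sketch (the synthetic dataset and the choice of Gaussian Naive Bayes as an often-overconfident base model are illustrative assumptions):

```python
# Sketch: calibrating a classifier's probabilities with Platt scaling
# (a fitted sigmoid) and isotonic regression (a flexible monotone map).
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Platt scaling: logistic regression fit on the base model's scores
platt = CalibratedClassifierCV(GaussianNB(), method="sigmoid", cv=5)
platt.fit(X_train, y_train)

# Isotonic regression: non-decreasing step function, more flexible
iso = CalibratedClassifierCV(GaussianNB(), method="isotonic", cv=5)
iso.fit(X_train, y_train)

print(platt.predict_proba(X_test[:3]))  # calibrated class probabilities
```

Platt Scaling's sigmoid assumption makes it robust on small datasets, while isotonic regression needs more data but can correct more complex miscalibration shapes.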
intermediate
What does a perfectly calibrated model's reliability diagram look like?
It is a diagonal line from bottom-left to top-right, meaning predicted probabilities match observed frequencies exactly. For example, predictions of 0.8 correspond to events happening 80% of the time.
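The diagonal can be seen numerically by binning predictions and comparing each bin's mean predicted probability with its observed event frequency, which is what scikit-learn's `calibration_curve` computes. A sketch using simulated data from a perfectly calibrated model (the simulation setup is an assumption for illustration):

```python
import numpy as np
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
# Simulate a perfectly calibrated model: each outcome occurs with
# exactly its predicted probability.
y_prob = rng.uniform(0, 1, 50_000)
y_true = (rng.uniform(0, 1, 50_000) < y_prob).astype(int)

# prob_pred: mean predicted probability per bin
# prob_true: observed event frequency per bin
prob_true, prob_pred = calibration_curve(y_true, y_prob, n_bins=10)

# A calibrated model's points hug the diagonal: observed ≈ predicted
for pred, true in zip(prob_pred, prob_true):
    print(f"predicted {pred:.2f} -> observed {true:.2f}")
```

Plotting `prob_pred` against `prob_true` gives the reliability diagram; for this simulated model the points trace the diagonal within sampling noise.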
intermediate
How can you check if a model's probabilities are well calibrated?
You can use calibration plots (reliability diagrams) or metrics like the Brier score, which measures the mean squared difference between predicted probabilities and actual outcomes.
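The Brier score is simple enough to verify by hand against scikit-learn's `brier_score_loss` (the small example arrays below are made up for illustration):

```python
import numpy as np
from sklearn.metrics import brier_score_loss

y_true = np.array([0, 1, 1, 0, 1])          # actual binary outcomes
y_prob = np.array([0.1, 0.9, 0.8, 0.3, 0.6])  # predicted probabilities

# Brier score = mean squared difference between probability and outcome
score = brier_score_loss(y_true, y_prob)
manual = np.mean((y_prob - y_true) ** 2)
print(score, manual)  # both ≈ 0.062; lower is better, 0 is perfect
```

Note the Brier score mixes calibration with discrimination, so a reliability diagram remains the more direct visual check of calibration alone.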
What does probability calibration adjust in a model?
Probability calibration adjusts the predicted probabilities so they better reflect the true likelihood of events.
Which method fits a logistic regression to calibrate probabilities?
Platt Scaling fits a logistic regression model on the original model's scores to calibrate probabilities.
What shape does a reliability diagram have for a perfectly calibrated model?
A diagonal line means predicted probabilities match observed frequencies exactly.
Which metric measures the mean squared difference between predicted probabilities and actual outcomes?
The Brier score measures how close predicted probabilities are to actual outcomes.
Why might a model with high accuracy still need probability calibration?
High accuracy does not guarantee that predicted probabilities are well calibrated.
Explain in your own words what probability calibration is and why it matters.
Think about how confident a model's predictions should be to match reality.
Describe two methods used for probability calibration and how they differ.
One fits a logistic curve, the other fits a flexible shape.