What is the main goal of prediction distribution monitoring in an ML system?
Think about what can cause a model to perform worse after deployment.
Prediction distribution monitoring detects when the distribution of a model's predictions in production diverges from the distribution observed on training data, which is an early warning sign of degrading performance.
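As a minimal sketch of this idea (the function name `predictions_drifted` is illustrative, not from any specific tool), a monitor can compare production prediction probabilities against a baseline captured at training time with a two-sample Kolmogorov-Smirnov test:

```python
import numpy as np
from scipy import stats

def predictions_drifted(baseline, production, alpha=0.05):
    """Compare production prediction probabilities against the
    training-time baseline using a two-sample KS test."""
    statistic, p_value = stats.ks_2samp(baseline, production)
    return p_value < alpha  # True means the distributions differ

rng = np.random.default_rng(0)
baseline = rng.beta(2, 5, size=5000)  # training-time prediction scores
shifted = rng.beta(5, 2, size=5000)   # production scores after drift

print(predictions_drifted(baseline, baseline[:2500]))  # False: same distribution
print(predictions_drifted(baseline, shifted))          # True: drift detected
```

The baseline sample is the reference; any systematic shift in the production scores pushes the KS p-value down and flips the check to True.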
Given the following output from a drift detection tool monitoring prediction probabilities, what does it indicate?
{"drift_detected": true, "p_value": 0.01, "metric": "kolmogorov_smirnov"}

Recall that a low p-value means strong evidence against the null hypothesis.
A p-value of 0.01 is below the conventional 0.05 significance level, so the difference between the production and baseline prediction distributions is statistically significant and drift is reported.
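The decision rule behind the report can be reproduced directly. The field names follow the JSON shown above; the 0.05 significance level is a conventional choice, not something mandated by the tool:

```python
import json

report = '{"drift_detected": true, "p_value": 0.01, "metric": "kolmogorov_smirnov"}'
result = json.loads(report)

# Null hypothesis: production and baseline predictions come from the
# same distribution. A p-value below the significance level rejects it.
ALPHA = 0.05
significant = result["p_value"] < ALPHA

print(significant)                              # True: 0.01 < 0.05
print(significant == result["drift_detected"])  # True: consistent with the tool
```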
Which configuration snippet correctly sets up a monitoring job to track the prediction probability distribution with a Kolmogorov-Smirnov test every hour?
Focus on frequency, metric type, and data source relevant to prediction distribution.
Option D correctly configures hourly monitoring of prediction probabilities with a Kolmogorov-Smirnov test and a reasonable alert threshold.
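Since the answer options themselves are not reproduced here, the shape such a configuration might take can be sketched as a Python dict. Every key name below is illustrative rather than tied to any specific monitoring product; the point is that frequency, metric, data source, and alert threshold all need to be set:

```python
# Hypothetical monitoring-job configuration (all keys are illustrative).
monitor_config = {
    "name": "prediction-prob-drift",
    "data_source": "production_predictions",  # the model's output probabilities
    "baseline": "training_predictions",       # reference distribution
    "metric": "kolmogorov_smirnov",           # two-sample KS test
    "schedule": "hourly",                     # run the comparison every hour
    "alert": {"p_value_threshold": 0.05},     # alert when p < 0.05
}

# A quick sanity check that the required fields are present.
required = {"data_source", "baseline", "metric", "schedule", "alert"}
print(required.issubset(monitor_config))  # True
```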
An ML engineer notices no alerts are triggered despite clear changes in prediction distribution. Which is the most likely cause?
Consider how alert thresholds affect sensitivity.
If the alert threshold is set too high, small but meaningful distribution changes fall below it and never trigger alerts.
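A quick simulation illustrates the failure mode (the threshold values are illustrative): a small but real shift in prediction scores is clearly statistically significant, yet an alert keyed to an overly large KS statistic never fires:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
baseline = rng.normal(0.40, 0.10, size=5000)  # training-time scores
shifted = rng.normal(0.45, 0.10, size=5000)   # small but real production shift

ks_stat, p_value = stats.ks_2samp(baseline, shifted)
print(p_value < 0.05)  # True: the shift is statistically significant

# Alerting on the KS statistic itself (thresholds are illustrative):
print(ks_stat > 0.30)  # False: threshold too high, no alert fires
print(ks_stat > 0.05)  # True: a tighter threshold catches the drift
```

The middle check is exactly the scenario in the question: drift is real and detectable, but the alert threshold swallows it.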
What is the correct order of steps to implement prediction distribution monitoring for a deployed ML model?
Think about what you need before deploying monitoring and alerting.
First collect baseline data, then define the statistical tests and thresholds, next deploy the monitoring job, and finally set up alerts and dashboards.
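The four steps above can be sketched end to end. All names here are illustrative, and the KS test stands in for whatever test the monitoring stack actually uses:

```python
import numpy as np
from scipy import stats

# Step 1: collect a baseline of prediction probabilities at deployment time.
rng = np.random.default_rng(2)
baseline = rng.beta(2, 5, size=5000)

# Step 2: define the test and its thresholds.
ALPHA = 0.05  # significance level for the KS test

def check_drift(window):
    """Step 3: the deployed monitoring job runs this on each window
    of production predictions (e.g. hourly)."""
    _, p_value = stats.ks_2samp(baseline, window)
    return p_value < ALPHA

def alert(window):
    """Step 4: wire the check into alerting and dashboards."""
    if check_drift(window):
        print("ALERT: prediction distribution drift detected")

alert(baseline[:2500])             # healthy window: stays silent
alert(rng.beta(5, 2, size=1000))   # drifted window: prints an alert
```

Note the ordering dependency: the test in step 2 is meaningless without the baseline from step 1, and the alerts in step 4 have nothing to fire on until the job from step 3 is running.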