What does model drift mean in the context of monitoring machine learning models?
Think about what happens when the data the model sees changes after deployment.
Model drift occurs when the data the model receives in production diverges from the data it was trained on, so its predictions degrade over time.
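A minimal sketch of one way to spot this kind of drift: compare a live feature's mean against the training mean, measured in training standard deviations. The function name and the 2-sigma threshold are illustrative choices, not from any particular monitoring library.

```python
import statistics

def mean_shift_drift(train_values, live_values, threshold=2.0):
    """Flag drift when the live mean moves more than `threshold`
    training standard deviations away from the training mean.
    (Illustrative heuristic; real systems often use statistical
    tests or distribution-distance metrics instead.)"""
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    live_mu = statistics.mean(live_values)
    return abs(live_mu - mu) > threshold * sigma

train = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2]
drifted = [14.9, 15.2, 15.1, 14.8, 15.0, 15.3]
print(mean_shift_drift(train, train))    # stable data: False
print(mean_shift_drift(train, drifted))  # shifted data: True
```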
You run a monitoring script that outputs the following line:
Warning: Model accuracy dropped from 0.92 to 0.75 in last 24 hours
What does this output indicate?
Look at the numbers and the word 'dropped'.
The output shows that accuracy fell from 0.92 to 0.75 within 24 hours, meaning the model's performance has degraded sharply. A drop that fast is a common symptom of drift or a broken data pipeline.
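A monitoring script could emit a warning line like the one above with a check along these lines. The function name and the 0.05 drop threshold are assumptions for illustration, not details from the source script.

```python
def accuracy_warning(prev_acc, curr_acc, min_drop=0.05):
    """Return a warning string when accuracy falls by more than
    `min_drop` between two measurements; otherwise return None.
    (Illustrative sketch; names and threshold are hypothetical.)"""
    if prev_acc - curr_acc > min_drop:
        return (f"Warning: Model accuracy dropped from {prev_acc:.2f} "
                f"to {curr_acc:.2f} in last 24 hours")
    return None

print(accuracy_warning(0.92, 0.75))
# Warning: Model accuracy dropped from 0.92 to 0.75 in last 24 hours
```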
You want to create an automated alert that triggers when model precision falls below 0.8. Which step should you include in your monitoring workflow?
Think about what data you need to calculate precision and how often.
To alert on precision, collect the model's predictions together with ground-truth labels, compute precision on a regular schedule, and trigger the alert whenever it falls below 0.8.
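The alerting step described above can be sketched as follows, assuming binary predictions and labels are collected as parallel lists (the function names are illustrative):

```python
def precision(preds, labels):
    """Precision = true positives / all positive predictions."""
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    return tp / (tp + fp) if (tp + fp) else 1.0

def precision_alert(preds, labels, threshold=0.8):
    """True when precision falls below the alert threshold."""
    return precision(preds, labels) < threshold

preds = [1, 1, 1, 0, 1]
labels = [1, 0, 1, 0, 0]  # two of four positive predictions are wrong
print(precision_alert(preds, labels))  # precision is 0.5, so: True
```

In a real workflow this check would run on a schedule (e.g. a cron job) over the most recent window of labeled predictions.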
Your monitoring dashboard shows a sudden drop in model recall. Which of the following is the most likely cause?
Recall measures the fraction of actual positives the model finds. What could affect this?
A shift in input data distribution can cause the model to miss more true positives, lowering recall.
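One way to confirm a distribution shift like this is to compare the empirical input distributions from training and production. The sketch below uses total variation distance over a categorical feature; the feature values and sample data are hypothetical.

```python
from collections import Counter

def total_variation(a, b):
    """Half the L1 distance between the empirical distributions of
    two samples: 0.0 means identical, 1.0 means fully disjoint."""
    ca, cb = Counter(a), Counter(b)
    keys = set(ca) | set(cb)
    return 0.5 * sum(abs(ca[k] / len(a) - cb[k] / len(b)) for k in keys)

train_inputs = ["web"] * 80 + ["mobile"] * 20
live_inputs = ["web"] * 30 + ["mobile"] * 70  # traffic mix has shifted
print(total_variation(train_inputs, train_inputs))  # 0.0
print(total_variation(train_inputs, live_inputs))   # 0.5
```

A large distance between the two distributions supports the "input data shifted" explanation for the recall drop.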
Which metric is best to monitor for a fraud detection model where false negatives are very costly?
False negatives mean fraud cases missed by the model. Which metric focuses on catching positives?
Recall measures how many actual fraud cases the model detects, so it's critical when missing fraud is costly.
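Concretely, recall for a fraud model can be computed like this, assuming label 1 marks fraud (the sample data is illustrative):

```python
def recall(preds, labels):
    """Recall = true positives / all actual positives, i.e. the
    fraction of actual fraud cases (label 1) the model flags."""
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
    return tp / (tp + fn) if (tp + fn) else 1.0

labels = [1, 1, 1, 1, 0, 0]
preds = [1, 1, 0, 0, 0, 1]  # catches two fraud cases, misses two
print(recall(preds, labels))  # 0.5
```

Note the false positive at the end does not affect recall at all; only missed fraud (false negatives) lowers it, which is exactly why recall is the metric to watch here.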