Imagine you deployed a machine learning model that predicts customer churn. Why is it important to monitor this model after deployment?
Think about how real-world data can change after deployment.
Monitoring is crucial because real-world data can change after deployment, causing the model's performance to degrade. Detecting this early lets you retrain or adjust the model before its predictions become unreliable.
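One way to put this into practice is to track accuracy on batches of labeled feedback as it arrives and raise an alert when it falls too far below the training baseline. This is a minimal sketch; the baseline and tolerance values are illustrative assumptions, not values from the question.

```python
def batch_accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def check_for_degradation(y_true, y_pred, baseline=0.90, tolerance=0.05):
    """Return True if accuracy dropped more than `tolerance` below the baseline."""
    return batch_accuracy(y_true, y_pred) < baseline - tolerance

# Example: a feedback batch where the churn model got 8 of 10 right.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 1, 1, 0]
print(batch_accuracy(y_true, y_pred))        # 0.8
print(check_for_degradation(y_true, y_pred)) # True: 0.8 < 0.90 - 0.05
```

In a real pipeline the alert would feed a dashboard or paging system rather than a print statement.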
You need to deploy a model on a device with limited memory and processing power. Which model type is best suited for this production environment?
Think about model size and speed for devices with limited resources.
Simple models like linear regression or small decision trees require far less memory and compute than large neural networks, making them well suited to resource-constrained devices.
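To see why such models are cheap at inference time, note that a trained logistic regression reduces to a handful of coefficients, and prediction is just a dot product plus a sigmoid. The weights below are made-up illustrative values, not a trained model.

```python
import math

WEIGHTS = [0.8, -1.2, 0.5]   # one coefficient per input feature (illustrative)
BIAS = -0.3

def predict_proba(features):
    """Churn probability via logistic regression: sigmoid(w . x + b)."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))

def predict(features, threshold=0.5):
    """Binary decision from the probability."""
    return int(predict_proba(features) >= threshold)

print(predict_proba([1.0, 0.2, 0.5]))  # a probability between 0 and 1
print(predict([1.0, 0.2, 0.5]))
```

The whole model fits in a few floats, so memory footprint and per-prediction cost are negligible compared with a deep network.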
After deploying a classification model, you observe the following confusion matrix on new data:
True Positive: 80
False Positive: 20
True Negative: 900
False Negative: 100
What is the precision of the model on this data?
Precision = True Positives / (True Positives + False Positives)
Precision measures what fraction of predicted positives are actually positive. Here, precision = 80 / (80 + 20) = 0.80.
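The calculation above can be expressed as a small helper, using the counts from the confusion matrix in the question:

```python
def precision(tp, fp):
    """Precision = TP / (TP + FP): fraction of predicted positives that are correct."""
    return tp / (tp + fp)

# Counts from the confusion matrix: TP = 80, FP = 20.
print(precision(80, 20))  # 0.8
```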
Your deployed model's accuracy suddenly drops. You suspect data drift. Which of the following is the most likely cause?
Think about what data drift means in production.
Data drift means the distribution of incoming input data differs from the distribution the model saw during training, so the patterns the model learned no longer fit and performance drops.
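A simple way to check for this kind of shift is to compare a feature's statistics in recent production data against the training data. This sketch flags drift when the mean shifts by more than a chosen number of training standard deviations; the threshold is an illustrative assumption, and real systems often use per-feature statistical tests (e.g. Kolmogorov–Smirnov) instead.

```python
import statistics

def drift_score(train_values, live_values):
    """Absolute mean shift, measured in units of the training standard deviation."""
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    return abs(statistics.mean(live_values) - mu) / sigma

def has_drifted(train_values, live_values, threshold=2.0):
    """Flag drift when the mean shift exceeds `threshold` training std devs."""
    return drift_score(train_values, live_values) > threshold

train = [10, 12, 11, 13, 12, 11, 10, 12]
live = [20, 22, 21, 19, 23, 20, 22, 21]   # the live distribution has shifted upward
print(has_drifted(train, live))   # True
print(has_drifted(train, train))  # False: identical distributions
```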
You want to deploy a neural network model that is stable and less likely to overfit in production. Which hyperparameter setting helps achieve this?
Think about techniques that prevent overfitting and improve generalization.
Setting a dropout rate randomly disables a fraction of neurons during training, which prevents the network from relying too heavily on any single neuron. This regularization helps the model generalize better and behave more stably in production.
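The mechanism can be sketched without any framework. In standard "inverted" dropout, each activation is zeroed with probability p during training and the survivors are scaled by 1/(1-p) so expected activations are unchanged; at inference time all neurons are used as-is.

```python
import random

def dropout(activations, p=0.5, training=True):
    """Apply inverted dropout to a list of activations."""
    if not training or p == 0.0:
        return list(activations)  # inference: use all neurons unchanged
    scale = 1.0 / (1.0 - p)       # rescale survivors to preserve expected value
    return [a * scale if random.random() >= p else 0.0
            for a in activations]

random.seed(0)
h = [0.4, 1.2, -0.7, 0.9]
print(dropout(h, p=0.5))                  # some units zeroed, the rest scaled by 2
print(dropout(h, p=0.5, training=False))  # inference: unchanged
```

In frameworks such as PyTorch or Keras this is what a dropout layer does internally; the hyperparameter you set is the rate p.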