
Alert thresholds and policies in MLOps - Commands & Configuration

Introduction
Alert thresholds and policies notify you when something important happens in your machine learning system. They watch key metrics and fire alerts when those metrics cross defined limits, so you can respond to problems quickly. Common situations where they help:

  • You want to know if your model's accuracy drops below a certain level after deployment.
  • You need to be alerted when the input data to your model changes unexpectedly (data drift).
  • You want to monitor resource usage such as CPU or memory during model training and be notified if it exceeds limits.
  • You want to track whether your model's prediction latency becomes too slow.
  • You want to automate responses to certain conditions by linking alerts to actions.
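The last use case, linking alerts to automated actions, can be sketched as a simple dispatch table. Everything here (the function names, the ACTIONS registry) is an illustrative assumption, not part of any real MLflow API:

```python
# Hypothetical sketch: map alert names to handler functions so that an
# alert firing triggers an automated response. Names are illustrative.

def on_accuracy_drop(alert_name, value):
    # e.g. roll back to the previous model version
    return f"rollback triggered by {alert_name} (value={value})"

def on_high_cpu(alert_name, value):
    # e.g. scale out the training cluster
    return f"scale-out triggered by {alert_name} (value={value})"

ACTIONS = {
    "Model Accuracy Drop": on_accuracy_drop,
    "High CPU Usage": on_high_cpu,
}

def handle_alert(alert_name, value):
    """Run the registered action for a fired alert, if any."""
    action = ACTIONS.get(alert_name)
    return action(alert_name, value) if action else "no automated action"

print(handle_alert("Model Accuracy Drop", 0.80))
```

Alerts without a registered handler fall through to "no automated action", so notification-only policies still work unchanged.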
Config File - alert_policy.yaml
alert_policies:
  - name: "Model Accuracy Drop"
    metric: "model_accuracy"
    threshold: 0.85
    comparison: "less_than"
    severity: "critical"
    notification_channels:
      - "email"
      - "slack"
  - name: "Data Drift Detected"
    metric: "input_data_drift"
    threshold: 0.1
    comparison: "greater_than"
    severity: "warning"
    notification_channels:
      - "email"
  - name: "High CPU Usage"
    metric: "cpu_usage"
    threshold: 80
    comparison: "greater_than"
    severity: "critical"
    notification_channels:
      - "pagerduty"

This YAML file defines alert policies for monitoring your ML system.

  • name: The alert's name for easy identification.
  • metric: The metric to watch, like model accuracy or CPU usage.
  • threshold: The value that triggers the alert.
  • comparison: How to compare the metric to the threshold (less_than or greater_than).
  • severity: How serious the alert is (warning or critical).
  • notification_channels: Where to send alerts, such as email, Slack, or PagerDuty.
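To make the threshold and comparison fields concrete, here is a minimal sketch of how a policy like the ones above could be evaluated in Python. The policy dict mirrors the YAML fields; the function name `is_triggered` is an assumption for illustration:

```python
# Evaluate one alert policy against an observed metric value.
# The policy dict mirrors the fields in alert_policy.yaml.

def is_triggered(policy, value):
    """Return True if the observed metric value crosses the policy threshold."""
    if policy["comparison"] == "less_than":
        return value < policy["threshold"]
    if policy["comparison"] == "greater_than":
        return value > policy["threshold"]
    raise ValueError(f"unknown comparison: {policy['comparison']}")

accuracy_policy = {
    "name": "Model Accuracy Drop",
    "metric": "model_accuracy",
    "threshold": 0.85,
    "comparison": "less_than",
    "severity": "critical",
    "notification_channels": ["email", "slack"],
}

print(is_triggered(accuracy_policy, 0.80))  # 0.80 < 0.85 -> True
print(is_triggered(accuracy_policy, 0.90))  # 0.90 >= 0.85 -> False
```

Keeping the comparison logic in one place like this makes it easy to support both directions (accuracy dropping, CPU rising) with the same policy schema.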
Commands
This command creates alert policies in MLflow using the configuration file. It sets up the system to watch the specified metrics and send notifications when thresholds are crossed.
Terminal
mlflow alerts create --file alert_policy.yaml
Expected Output
Alert policies created successfully: Model Accuracy Drop, Data Drift Detected, High CPU Usage
--file - Specifies the alert policy configuration file to use
This command lists all active alert policies so you can verify they were created correctly.
Terminal
mlflow alerts list
Expected Output
ID  Name                 Metric            Threshold  Comparison    Severity
1   Model Accuracy Drop  model_accuracy    0.85       less_than     critical
2   Data Drift Detected  input_data_drift  0.1        greater_than  warning
3   High CPU Usage       cpu_usage         80         greater_than  critical
This command tests the alert policy named 'Model Accuracy Drop' by simulating a metric value of 0.80, which is below the threshold, to check if the alert triggers correctly.
Terminal
mlflow alerts test --name "Model Accuracy Drop" --metric-value 0.80
Expected Output
Alert triggered: Model Accuracy Drop (model_accuracy = 0.80 < 0.85)
Severity: critical
Notifications sent to: email, slack
--name - Specifies which alert policy to test
--metric-value - Simulates the metric value for testing the alert
Key Concept

If you remember nothing else from this pattern, remember: alert thresholds watch important metrics and notify you immediately when values cross set limits.

Common Mistakes
Setting thresholds too tight or too loose without testing
This causes too many false alerts or missed important issues, making alerts useless or annoying.
Test alert policies with realistic metric values and adjust thresholds to balance sensitivity and noise.
Not specifying notification channels correctly
Alerts won't reach the right people or systems, so problems go unnoticed.
Always include valid notification channels like email or Slack in the alert policy.
Forgetting to list or verify alert policies after creation
You might think alerts are active when they are not, missing critical notifications.
Use the list command to confirm alert policies are created and active.
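One way to avoid the "too tight or too loose" mistake above is to replay a candidate threshold against historical metric values before enabling the alert. A small sketch, using made-up sample data for illustration:

```python
# Estimate how noisy a less_than threshold would be by replaying it
# against historical metric values. The data below is made up.

historical_accuracy = [0.91, 0.89, 0.92, 0.84, 0.90, 0.88, 0.93, 0.86]

def alert_rate(values, threshold):
    """Fraction of past observations that would have fired a less_than alert."""
    fired = sum(1 for v in values if v < threshold)
    return fired / len(values)

# 0.85 would have fired on 1 of 8 runs (12.5%);
# 0.90 would have fired on half of them -- probably too noisy.
print(alert_rate(historical_accuracy, 0.85))
print(alert_rate(historical_accuracy, 0.90))
```

A threshold that would have fired on a large fraction of historically healthy runs is a sign it is set too tight; one that never fires, even on known incidents, is too loose.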
Summary
Create alert policies using a YAML file that defines metrics, thresholds, and notification channels.
Use CLI commands to create, list, and test alert policies to ensure they work as expected.
Alert thresholds help catch problems early by notifying you when key metrics cross set limits.