Complete the code to specify the Alertmanager service in Prometheus configuration.
alerting:
  alertmanagers:
    - static_configs:
        - targets: ['[1]']

The Alertmanager service usually runs on port 9093 and is referenced by its service name in Kubernetes, such as 'alertmanager:9093'.
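A completed configuration might look like the following; 'alertmanager:9093' assumes a Kubernetes Service named 'alertmanager' in the same namespace, per the hint above:

```yaml
# Prometheus alerting section pointing at the Alertmanager service.
# The target assumes a Service named 'alertmanager' listening on the
# default port 9093; adjust the name/namespace for your cluster.
alerting:
  alertmanagers:
    - static_configs:
        - targets: ['alertmanager:9093']
```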
Complete the Alertmanager configuration to set the receiver name.
route:
  receiver: '[1]'
The 'default' receiver is commonly used as the main receiver in Alertmanager routing.
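Filled in with the conventional receiver name from the hint, the route looks like this:

```yaml
# Top-level route: every alert falls through to this receiver
# unless a more specific child route matches it.
route:
  receiver: 'default'
```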
Fix the error in the Alertmanager receiver configuration to send alerts to Slack.
receivers:
  - name: 'slack-alerts'
    slack_configs:
      - api_url: '[1]'
        channel: '#alerts'
The Slack webhook URL must be a valid HTTPS URL provided by Slack for incoming webhooks.
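A completed receiver might look like the sketch below; the webhook URL is a placeholder in the format Slack issues for incoming webhooks, not a real endpoint:

```yaml
receivers:
  - name: 'slack-alerts'
    slack_configs:
      # Placeholder URL; substitute the HTTPS webhook URL Slack
      # generates when you create an incoming-webhook integration.
      - api_url: 'https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX'
        channel: '#alerts'
```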
Fill both blanks to create a Prometheus alert rule that triggers when CPU usage is high.
groups:
  - name: cpu_alerts
    rules:
      - alert: HighCPUUsage
        expr: sum(rate(container_cpu_usage_seconds_total[1][5m])) by (instance) [2] 0.8
        for: 2m
        labels:
          severity: warning

The expression filters CPU usage for the container named 'myapp' and triggers if usage is greater than 0.8.
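One plausible completed rule, filling [1] with a label selector for the 'myapp' container and [2] with the greater-than operator; note that the exact label key ('name' here, sometimes 'container') depends on which exporter produces the metric:

```yaml
groups:
  - name: cpu_alerts
    rules:
      - alert: HighCPUUsage
        # {name="myapp"} assumes cAdvisor-style container labels;
        # fires when 5-minute average CPU usage per instance exceeds 0.8 cores.
        expr: sum(rate(container_cpu_usage_seconds_total{name="myapp"}[5m])) by (instance) > 0.8
        for: 2m
        labels:
          severity: warning
```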
Fill all three blanks to define an Alertmanager route that sends critical alerts to email and slack receivers.
route:
  group_by: ['alertname']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 3h
  receiver: '[1]'
  routes:
    - match:
        severity: '[2]'
      receiver: '[3]'
The main route uses the 'default' receiver, and alerts with severity 'critical' are matched to the 'email-notifications' receiver.
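A completed route consistent with the hint, assuming 'critical' as the matched severity value:

```yaml
route:
  group_by: ['alertname']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 3h
  receiver: 'default'                 # fallback for unmatched alerts
  routes:
    - match:
        severity: 'critical'          # only critical alerts take this branch
      receiver: 'email-notifications'
```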