Kubernetes · DevOps · ~10 mins

Alerting with Prometheus Alertmanager in Kubernetes - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1: Fill in the blank (easy)

Complete the code to specify the Alertmanager service in Prometheus configuration.

YAML
alerting:
  alertmanagers:
  - static_configs:
    - targets: ['[1]']
Options:
A. alertmanager:9093
B. prometheus:9090
C. node-exporter:9100
D. grafana:3000
Common Mistakes
Using Prometheus port 9090 instead of Alertmanager port 9093.
Using unrelated service names like Grafana or node-exporter.
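
For context, a completed version of this snippet might look like the following. This assumes Alertmanager is exposed as a Kubernetes Service named alertmanager in the same namespace, listening on its default port 9093:

```yaml
# prometheus.yml: point Prometheus at Alertmanager for alert delivery
alerting:
  alertmanagers:
  - static_configs:
    - targets: ['alertmanager:9093']  # Alertmanager's default port is 9093
```

Port 9090 is Prometheus's own web/API port, which is why it is a distractor here.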
Task 2: Fill in the blank (medium)

Complete the Alertmanager configuration to set the receiver name.

YAML
route:
  receiver: '[1]'
Options:
A. default
B. prometheus
C. email-notifications
D. slack-alerts
Common Mistakes
Using a receiver name not defined in the configuration.
Confusing receiver names with service names.
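
The key constraint is that the name in route.receiver must exactly match a receiver defined under receivers:. A minimal sketch, assuming a receiver named default:

```yaml
# alertmanager.yml: the top-level route hands alerts to the 'default' receiver
route:
  receiver: 'default'
receivers:
- name: 'default'   # must match route.receiver exactly, or Alertmanager rejects the config
```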
Task 3: Fill in the blank (hard)

Fix the error in the Alertmanager receiver configuration to send alerts to Slack.

YAML
receivers:
- name: 'slack-alerts'
  slack_configs:
  - api_url: '[1]'
    channel: '#alerts'
Options:
A. http://localhost:9093
B. https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX
C. smtp://user:pass@mailserver
D. http://alertmanager:9093
Common Mistakes
Using local or HTTP URLs instead of Slack webhook URLs.
Confusing Slack webhook URL with SMTP or Alertmanager URLs.
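
A corrected receiver block, using the dummy Slack incoming-webhook URL from option B (a real deployment would use the webhook URL issued by Slack for your workspace):

```yaml
receivers:
- name: 'slack-alerts'
  slack_configs:
  # api_url must be a Slack incoming-webhook URL, not an Alertmanager or SMTP endpoint
  - api_url: 'https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX'
    channel: '#alerts'
```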
Task 4: Fill in the blank (hard)

Fill both blanks to create a Prometheus alert rule that triggers when CPU usage is high.

YAML
groups:
- name: cpu_alerts
  rules:
  - alert: HighCPUUsage
    expr: sum(rate(container_cpu_usage_seconds_total{[1]}[5m])) by (instance) [2] 0.8
    for: 2m
    labels:
      severity: warning
Options:
A. container="myapp"
B. >
C. <
D. job="kubelet"
Common Mistakes
Using incorrect label selectors or operators.
Forgetting to wrap label selectors in curly braces.
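
Filled in, the rule might read as follows. The container="myapp" selector and the 0.8 threshold are the quiz's example values, not universal choices:

```yaml
groups:
- name: cpu_alerts
  rules:
  - alert: HighCPUUsage
    # fire when the per-instance sum of CPU usage rate exceeds 0.8 cores
    expr: sum(rate(container_cpu_usage_seconds_total{container="myapp"}[5m])) by (instance) > 0.8
    for: 2m          # condition must hold for 2 minutes before the alert fires
    labels:
      severity: warning
```

Note the label selector sits inside curly braces and the range selector [5m] comes after it; a "less than" operator would invert the alert's meaning.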
Task 5: Fill in the blank (hard)

Fill all three blanks to define an Alertmanager route that sends critical alerts to the email or Slack receiver.

YAML
route:
  group_by: ['alertname']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 3h
  receiver: '[1]'
  routes:
  - match:
      severity: '[2]'
    receiver: '[3]'
Options:
A. default
B. critical
C. email-notifications
D. slack-alerts
Common Mistakes
Mixing receiver names or severity labels incorrectly.
Using receivers not defined in the configuration.
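
One consistent completion, assuming default, email-notifications, and slack-alerts are all defined under receivers: unmatched alerts fall through to default, while alerts labeled severity: critical are re-routed to a notification receiver (email-notifications shown here; slack-alerts would be equally valid for the third blank):

```yaml
route:
  group_by: ['alertname']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 3h
  receiver: 'default'              # fallback receiver for alerts no sub-route matches
  routes:
  - match:
      severity: 'critical'         # sub-route catches critical alerts only
    receiver: 'email-notifications'
```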