Kubernetes · DevOps · ~20 mins

Prometheus for metrics collection in Kubernetes - Practice Problems & Coding Challenges

Challenge - 5 Problems
🎖️ Prometheus Mastery Badge: get all challenges correct to earn this badge!
💻 Command Output (intermediate)
Prometheus Query Result Interpretation
You run the Prometheus query rate(http_requests_total[5m]) to monitor HTTP requests per second over the last 5 minutes. What type of output do you expect from this query?
A. A single integer representing the total number of HTTP requests since the server started.
B. A time series showing the per-second rate of HTTP requests averaged over the last 5 minutes.
C. A list of all HTTP request logs collected in the last 5 minutes.
D. A histogram showing the distribution of HTTP request sizes.
💡 Hint
Think about what the rate() function does in Prometheus queries.
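To ground the hint: `rate()` applied to a counter returns one per-second rate per time series (per unique label set), not a single cumulative total. A sketch of the output shape (label sets and values here are illustrative, not from a real server):

```promql
rate(http_requests_total[5m])
# Result is an instant vector: one rate value per label combination, e.g.
# {method="GET",  handler="/api", instance="10.0.0.5:8080"}  2.4
# {method="POST", handler="/api", instance="10.0.0.5:8080"}  0.7
```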
Configuration (intermediate)
Prometheus Scrape Configuration
You want Prometheus to scrape metrics from a Kubernetes service named my-app in the default namespace every 15 seconds. Which scrape configuration snippet correctly achieves this?
A.
scrape_configs:
  - job_name: 'my-app'
    file_sd_configs:
      - files: ['my-app.yaml']
    scrape_interval: 15s
B.
scrape_configs:
  - job_name: 'my-app'
    static_configs:
      - targets: ['my-app.default.svc.cluster.local:9090']
    scrape_interval: 15s
C.
scrape_configs:
  - job_name: 'my-app'
    kubernetes_sd_configs:
      - role: endpoints
    relabel_configs:
      - source_labels: [__meta_kubernetes_service_name]
        action: keep
        regex: my-app
    scrape_interval: 15s
D.
scrape_configs:
  - job_name: 'my-app'
    kubernetes_sd_configs:
      - role: pod
    scrape_interval: 15s
💡 Hint
Use Kubernetes service discovery and filter by service name.
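Building on the hint: beyond filtering by service name, a common convention-based pattern is to let workloads opt in to scraping via annotations. Note this is a convention, not built-in Prometheus behavior; it only works if your `relabel_configs` actually check the annotations. A sketch:

```yaml
scrape_configs:
  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods that opt in via the conventional annotation
      # prometheus.io/scrape: "true" (dots and slashes become underscores
      # in the discovered meta label).
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```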
🔀 Workflow (advanced)
Prometheus Alerting Workflow
You want to create an alert that fires when the CPU usage of any pod exceeds 80% for 5 minutes. Which sequence of steps correctly describes the workflow to achieve this?
A.
1. Create a Kubernetes pod annotation for CPU limit.
2. Restart Prometheus.
3. Configure Grafana dashboard.
4. Send email manually when CPU is high.
B.
1. Configure Prometheus to scrape node metrics.
2. Write a query for disk usage.
3. Restart Kubernetes cluster.
4. Configure Alertmanager to silence alerts.
C.
1. Write a Prometheus query to list pods.
2. Use kubectl to scale pods.
3. Restart Alertmanager.
4. Configure Prometheus scrape interval.
D.
1. Write a Prometheus alerting rule with a query checking CPU usage > 80% for 5 minutes.
2. Add the rule to Prometheus alerting rules file.
3. Reload Prometheus configuration.
4. Configure Alertmanager to handle the alert notifications.
💡 Hint
Focus on alerting rules and notification setup.
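The rule-then-notify workflow can be sketched as an alerting rule like the one below. The metric names assume cAdvisor/kubelet metrics (`container_cpu_usage_seconds_total`) and kube-state-metrics (`kube_pod_container_resource_limits`) are being scraped; adjust to whatever your cluster actually exposes:

```yaml
groups:
  - name: pod-cpu
    rules:
      - alert: PodHighCPU
        # CPU usage as a fraction of the pod's CPU limit, over 80%.
        expr: |
          sum(rate(container_cpu_usage_seconds_total[5m])) by (pod)
            / sum(kube_pod_container_resource_limits{resource="cpu"}) by (pod)
            > 0.8
        for: 5m          # must hold for 5 minutes before firing
        labels:
          severity: warning
        annotations:
          summary: "Pod {{ $labels.pod }} CPU above 80% for 5 minutes"
```

After adding this file to `rule_files` and reloading Prometheus, Alertmanager handles routing the firing alert to receivers (email, Slack, PagerDuty, etc.).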
Troubleshoot (advanced)
Prometheus Metrics Missing from Target
You notice Prometheus is not scraping metrics from a pod exposing metrics on port 9100, but the pod is healthy and reachable. What is the most likely cause?
A. The pod's metrics endpoint is not labeled or annotated correctly, so Prometheus does not discover it as a scrape target.
B. The Prometheus server is not running.
C. The pod is running on a node without network connectivity.
D. The pod's container image is outdated.
💡 Hint
Check how Prometheus discovers scrape targets in Kubernetes.
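As a concrete illustration of the hint: under the common annotation convention, a pod that should be scraped on port 9100 would declare something like the following (pod name and image tag are hypothetical, and the annotations only take effect if the Prometheus `relabel_configs` honor them):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: metrics-pod            # hypothetical pod name
  annotations:
    prometheus.io/scrape: "true"   # opt in to scraping (convention only)
    prometheus.io/port: "9100"     # which port exposes /metrics
spec:
  containers:
    - name: exporter
      image: prom/node-exporter:v1.7.0   # hypothetical tag
      ports:
        - containerPort: 9100
```

If these annotations (or the labels your service discovery filters on) are missing, the pod can be perfectly healthy and reachable yet never appear as a target.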
Best Practice (expert)
Efficient Prometheus Metrics Collection Strategy
Which approach is the best practice to minimize Prometheus load while collecting metrics from a large Kubernetes cluster?
A. Use service discovery with relabeling to scrape only necessary pods and endpoints, and increase scrape intervals for less critical metrics.
B. Scrape all pods every 5 seconds to ensure real-time data accuracy.
C. Disable service discovery and manually list all pod IPs in static configs.
D. Run multiple Prometheus servers scraping the same targets simultaneously without coordination.
💡 Hint
Think about selective scraping and scrape frequency.
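Selective scraping and tiered scrape frequency can be combined in one configuration. A sketch under assumed namespaces (`production` and `batch` are hypothetical; Prometheus allows a per-job `scrape_interval` overriding the global one):

```yaml
scrape_configs:
  # Critical workloads: discover via Kubernetes SD, keep only one namespace.
  - job_name: 'critical-apps'
    scrape_interval: 15s
    kubernetes_sd_configs:
      - role: endpoints
    relabel_configs:
      - source_labels: [__meta_kubernetes_namespace]
        action: keep
        regex: production        # hypothetical namespace
  # Less critical metrics: same discovery, longer interval to reduce load.
  - job_name: 'batch-jobs'
    scrape_interval: 60s
    kubernetes_sd_configs:
      - role: endpoints
    relabel_configs:
      - source_labels: [__meta_kubernetes_namespace]
        action: keep
        regex: batch             # hypothetical namespace
```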