Airflow tracks the state of every task it runs and updates its internal metrics accordingly, exposing them on a dedicated HTTP endpoint. Prometheus scrapes this endpoint at a regular interval to pull the latest values. The pull model matters: when a task starts or completes, Airflow updates its metrics immediately, but Prometheus only sees the change on its next scrape. Once collected, the data can be queried directly in Prometheus or visualized in Grafana dashboards. The essential requirement is the metrics endpoint itself; without it, Prometheus has nothing to scrape and collects no data. This cycle repeats continuously as Airflow runs tasks and Prometheus scrapes the results.
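
The pull model above can be sketched with nothing but the standard library: a process updates an in-memory counter, serves it on a /metrics endpoint in the Prometheus text format, and a scraper only observes the new value when it next fetches the endpoint. The metric name `task_runs_total` and the handler are illustrative stand-ins, not Airflow's real metrics (Airflow itself emits StatsD-style metrics that typically reach Prometheus through an exporter).

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-process metric store standing in for Airflow's internal state.
# (Hypothetical name; real Airflow metrics differ.)
metrics = {"task_runs_total": 0}

class MetricsHandler(BaseHTTPRequestHandler):
    """Serve metrics in the Prometheus text exposition format."""
    def do_GET(self):
        if self.path != "/metrics":
            self.send_error(404)
            return
        body = "".join(f"{name} {value}\n" for name, value in metrics.items())
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), MetricsHandler)  # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/metrics"

def scrape():
    """One Prometheus-style pull of the endpoint."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode()

before = scrape()                  # scrape sees the current value: 0
metrics["task_runs_total"] += 1    # a "task" completes; the process updates internally
after = scrape()                   # the update is visible only on the next scrape

print(before.strip())  # task_runs_total 0
print(after.strip())   # task_runs_total 1
server.shutdown()
```

The two scrapes bracket the update, which is exactly why Prometheus dashboards lag task events by up to one scrape interval.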