How dbt works (SQL + Jinja + YAML) - Performance & Efficiency
We want to understand how dbt run time grows as the volume of data or the number of models increases.
Specifically, how does dbt's combination of SQL, Jinja, and YAML affect execution time?
Analyze the time complexity of this dbt model snippet.
```sql
-- models/example_model.sql
{{ config(materialized='table') }}

select
    user_id,
    count(*) as total_orders
from {{ ref('orders') }}
where order_date >= '{{ var("start_date") }}'
group by user_id
```
This model combines SQL with Jinja templating: `config()` sets the materialization, `ref()` resolves the upstream `orders` model, and `var()` injects a value typically supplied in YAML (`dbt_project.yml`) or on the command line. dbt renders the Jinja first, then runs the resulting SQL to build a table.
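For intuition, if `start_date` were set to `'2024-01-01'` (a hypothetical value for illustration), the SQL that dbt compiles and sends to the warehouse would look roughly like this (schema name assumed):

```sql
-- compiled output (sketch, assuming start_date = '2024-01-01'
-- and the orders model builds into the analytics schema)
create table analytics.example_model as
select
    user_id,
    count(*) as total_orders
from analytics.orders
where order_date >= '2024-01-01'
group by user_id
```

The Jinja rendering itself is cheap; the cost that scales with data size is the warehouse executing this compiled query.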
Look at what repeats as input grows.
- Primary operation: Scanning and grouping rows in the referenced table.
- How many times: Once per run, but the scan touches every row matching the date filter.
As the number of rows in the orders table grows, the query scans more data.
| Input Size (n rows) | Approx. Operations |
|---|---|
| 10 | 10 rows scanned and grouped |
| 100 | 100 rows scanned and grouped |
| 1000 | 1000 rows scanned and grouped |
Pattern observation: The work grows roughly in direct proportion to the number of rows scanned.
Time Complexity: O(n)
This means the time grows linearly with the number of rows processed in the SQL query.
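One practical consequence: when full-table rebuilds get slow as the source grows, dbt's incremental materialization can keep per-run work proportional to the *new* rows rather than all rows. A minimal sketch, assuming a hypothetical non-aggregating model over `orders` with a usable `order_date` column (aggregations like the `count(*)` above need more careful incremental logic):

```sql
-- models/recent_orders.sql (hypothetical incremental sketch)
{{ config(materialized='incremental') }}

select
    order_id,
    user_id,
    order_date
from {{ ref('orders') }}
{% if is_incremental() %}
  -- on incremental runs, only scan rows newer than the latest
  -- row already present in the target table
  where order_date > (select max(order_date) from {{ this }})
{% endif %}
```

The first run still scans everything, but subsequent runs scale with the size of the new data, not the full history.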
[X] Wrong: "dbt runs all models instantly regardless of data size because it just runs SQL."
[OK] Correct: The SQL query inside dbt still processes data, so bigger tables mean more work and longer run times.
Understanding how dbt runs SQL with templating helps you explain data pipeline performance clearly and confidently.
"What if the model used a more complex join instead of a simple filter? How would the time complexity change?"
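As a starting point for that question, consider a hypothetical variant that joins `orders` (n rows) to a `users` model (m rows):

```sql
-- hypothetical variant with a join (users model is assumed)
select
    u.user_id,
    u.country,
    count(*) as total_orders
from {{ ref('orders') }} o
join {{ ref('users') }} u
    on o.user_id = u.user_id
group by u.user_id, u.country
```

The answer depends on the join algorithm the warehouse picks: a hash join is typically around O(n + m), a sort-merge join around O(n log n + m log m), and a worst-case nested loop O(n × m). The dbt/Jinja layer is unchanged; the complexity lives in the compiled SQL.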