Test severity levels in dbt - Time Complexity
When using test severity levels in dbt, it's important to understand how the number of tests affects total run time. Specifically: how does execution time grow as you add more tests with different severity levels? Below, we analyze the time complexity of running dbt tests with severity levels.
```yaml
# Example dbt test configuration with severity
version: 2

models:
  - name: customers
    columns:
      - name: customer_id   # illustrative column name
        tests:
          - unique:
              severity: error
          - not_null:
              severity: warn
```
This snippet defines two tests, each with a severity level that controls how dbt reports failures: `error` fails the run, while `warn` only logs a warning.
To analyze the runtime, identify the operations that repeat:
- Primary operation: Running each test query on the data model.
- How many times: Once per test defined for the model.
As you add more tests, the total time to run all tests grows roughly in direct proportion.
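The per-test loop described above can be sketched in Python. This is a hypothetical model of dbt-style test execution, not dbt's actual internals: each configured test compiles to one query, and severity is never consulted while counting the work.

```python
# Hypothetical sketch of how dbt-style test execution scales; not actual dbt internals.

def run_tests(tests):
    """Run every configured test once and count the queries issued."""
    queries_run = 0
    for test in tests:           # one iteration per configured test
        queries_run += 1         # each test compiles to exactly one query
        # test["severity"] is NOT consulted here; it only matters when a
        # failure is reported, after the query has already run.
    return queries_run

tests = [{"name": f"test_{i}", "severity": "warn" if i % 2 else "error"}
         for i in range(100)]

print(run_tests(tests))      # 100 tests -> 100 queries
print(run_tests(tests * 2))  # doubling the tests doubles the queries -> 200
```

Doubling the input list doubles the count, which is exactly the linear pattern the table below makes concrete.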
| Number of Tests (n) | Approx. Operations |
|---|---|
| 10 | 10 test queries run |
| 100 | 100 test queries run |
| 1000 | 1000 test queries run |
Pattern observation: Doubling the number of tests roughly doubles the work done.
Time Complexity: O(n)
This means the time to run tests grows linearly with the number of tests you have.
[X] Wrong: "Severity levels change how many tests run, so time complexity changes."
[OK] Correct: Severity only changes how failures are reported, not how many tests run. All tests still execute, so time grows with test count.
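One way to picture that distinction is a minimal sketch, assuming a warn/error reporting model like dbt's (this is an illustration, not dbt source code): every test executes, and severity is only consulted after a failure, when deciding how to report it and whether the run as a whole fails.

```python
# Minimal sketch: severity changes reporting, not execution (assumed model, not dbt source).

def run_suite(tests):
    executed = 0
    errors, warnings = [], []
    for test in tests:
        executed += 1                       # every test runs, whatever its severity
        if test["fails"]:
            # Severity is consulted only AFTER the test has run.
            if test["severity"] == "error":
                errors.append(test["name"])
            else:
                warnings.append(test["name"])
    exit_code = 1 if errors else 0          # only error-severity failures fail the run
    return executed, warnings, errors, exit_code

suite = [
    {"name": "unique_id",   "severity": "error", "fails": False},
    {"name": "not_null_id", "severity": "warn",  "fails": True},
]
print(run_suite(suite))  # (2, ['not_null_id'], [], 0): both ran, warn failure, run still passes
```

Both tests execute (so the work is the same), but the warn-severity failure does not change the exit code.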
Understanding how test counts affect runtime helps you design efficient data quality checks and explain your choices clearly in discussions.
"What if we grouped tests to run in parallel? How would that affect the time complexity?"