
Test severity levels in dbt - Deep Dive

Overview - Test severity levels
What is it?
Test severity levels in dbt are settings that indicate how serious a test failure is. They determine what happens when a test fails: stopping the run or just raising a warning. This lets teams handle data quality problems in a way that fits their needs. Without severity levels, every test failure would be treated the same, making it hard to prioritize fixes.
Why it matters
Severity levels let data teams focus on the most important problems first. If every test failure stopped everything, small issues could block progress. If no failures stopped anything, big problems might be ignored. Severity levels balance this by marking tests as errors or warnings, so teams can act smartly and keep data trustworthy without slowing down work.
Where it fits
Before learning test severity levels, you should understand dbt tests and how they check data quality. After this, you can learn about test result handling, notifications, and how to build data monitoring workflows that react differently based on severity.
Mental Model
Core Idea
Test severity levels classify test failures by importance to guide how dbt reacts to data quality issues.
Think of it like...
It's like traffic lights for data tests: red means stop and fix immediately, yellow means caution and check soon, green means all clear.
┌───────────────┐
│   dbt Tests   │
└──────┬────────┘
       │
       ▼
┌───────────────┐
│ Severity Level│
│ ┌───────────┐ │
│ │ Error     │ │
│ │ Warning   │ │
│ └───────────┘ │
└──────┬────────┘
       │
       ▼
┌───────────────┐
│ Action Taken  │
│ ┌───────────┐ │
│ │ Fail Run  │ │
│ │ Warn Only │ │
│ └───────────┘ │
└───────────────┘
Build-Up - 7 Steps
1
Foundation: Understanding dbt Tests Basics
🤔
Concept: Learn what dbt tests are and how they check data quality.
dbt tests are simple checks you add to your data models. They look for problems like missing values or duplicates. When you run dbt, it runs these tests and tells you if your data passes or fails.
Result
You know how to add and run basic tests in dbt to check data quality.
Understanding tests as quality checks is the base for knowing why severity matters.
2
Foundation: What Happens When Tests Fail
🤔
Concept: Discover the default behavior of dbt when a test fails.
By default, if any dbt test fails, the whole run stops and reports an error. This means no further steps happen until the problem is fixed.
Result
You see that test failures block progress by default.
Knowing the default stop-on-failure behavior shows why controlling severity is useful.
3
Intermediate: Introducing Severity Levels in dbt Tests
🤔Before reading on: do you think all dbt test failures must stop the run, or can some just warn? Commit to your answer.
Concept: Severity levels let you mark tests as errors or warnings to control failure impact.
dbt lets you set a severity for each test: 'error' means stop the run on failure; 'warn' means report a warning but continue. You set this in the test's config with severity: error or severity: warn.
Result
You can now mark tests to either fail the run or just warn without stopping.
Understanding severity lets you balance strictness and flexibility in data quality checks.
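As a minimal sketch (the model and column names are illustrative), severity is set in a test's config block in a schema.yml file:

```
# models/schema.yml -- model and column names are illustrative
models:
  - name: customers
    columns:
      - name: customer_id
        tests:
          - unique              # no config: defaults to severity 'error'
          - not_null:
              config:
                severity: warn  # failure is reported but the run continues
```

On failure, the unique test would stop the run, while the not_null test would only log a warning.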
4
Intermediate: Configuring Severity Levels in dbt
🤔Before reading on: do you think severity is set globally or per test? Commit to your answer.
Concept: Learn how to set severity levels per test or globally in dbt project files.
You can set severity inside each test's config block, or in dbt_project.yml for whole groups of tests. For example, config: severity: warn under a test in schema.yml (or {{ config(severity='warn') }} at the top of a singular test's SQL file) makes that test warn on failure. In dbt_project.yml, a +severity setting applies a default to every test beneath it.
Result
You know how to apply severity settings to control test behavior at different scopes.
Knowing configuration options helps tailor test reactions to your project's needs.
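A sketch of both scopes (project and model names are illustrative); a per-test config takes precedence over the project-level default:

```
# dbt_project.yml -- project-wide default for tests
tests:
  my_project:
    +severity: warn             # tests in my_project warn by default

# models/schema.yml -- per-test override wins over the default
models:
  - name: orders
    columns:
      - name: order_id
        tests:
          - not_null:
              config:
                severity: error # overrides the project default
```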
5
Intermediate: Interpreting Test Results by Severity
🤔Before reading on: do you think warnings appear in the same place as errors in dbt output? Commit to your answer.
Concept: Understand how dbt reports errors and warnings differently after a run.
When dbt runs, errors stop the run and show as failures. Warnings appear in the logs but do not stop the run. You can see warnings in the test results table with a 'warn' status, helping you spot issues without blocking.
Result
You can read dbt output and know which issues are critical and which are warnings.
Distinguishing errors from warnings in output helps prioritize fixes effectively.
6
Advanced: Using Severity Levels in Production Workflows
🤔Before reading on: do you think warnings should be ignored in production, or monitored closely? Commit to your answer.
Concept: Learn how severity levels fit into real data pipeline monitoring and alerting.
In production, errors usually trigger alerts and stop deployments to prevent bad data. Warnings might trigger notifications for review but allow pipelines to continue. Teams use severity to balance reliability and speed, fixing errors fast and tracking warnings over time.
Result
You understand how severity levels help manage data quality in live systems.
Knowing how severity guides alerting and pipeline control is key for robust data operations.
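In CI, this split is often operationalized from the command line. A sketch, assuming dbt's --warn-error flag (which promotes warnings to errors; flag placement can vary by dbt version) and an illustrative tag:critical selector:

```
# Nightly run: let warnings pass, review them from the logs
dbt test

# Deployment gate: treat any warning on critical tests as a failure
dbt --warn-error test --select tag:critical
```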
7
Expert: Advanced Severity: Customizing Behavior and Extensions
🤔Before reading on: do you think dbt allows custom severity levels beyond error and warn? Commit to your answer.
Concept: Explore how to extend or customize severity handling with hooks, macros, or external tools.
While dbt natively supports 'error' and 'warn', advanced users create custom macros or use orchestration tools to treat warnings differently, like auto-creating tickets or delaying alerts. Some integrate severity with external monitoring systems for richer workflows.
Result
You see how to build sophisticated data quality responses beyond built-in severity.
Understanding extensibility of severity handling unlocks powerful, tailored data quality systems.
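One common parse-time pattern (not a built-in feature; the target names are illustrative) uses Jinja so the same test errors in production but only warns in development:

```
# models/schema.yml -- severity resolved via Jinja when the project is parsed
models:
  - name: payments
    columns:
      - name: amount
        tests:
          - not_null:
              config:
                severity: "{{ 'error' if target.name == 'prod' else 'warn' }}"
```

Because Jinja is evaluated at parse time, this varies severity per environment, not mid-run.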
Under the Hood
dbt runs tests as SQL queries against your data warehouse. Each test returns rows that fail the condition. dbt collects these results and checks the severity setting. If severity is 'error' and failures exist, dbt stops the run and reports failure. If severity is 'warn', dbt logs the failure but continues. Internally, severity is a config flag that controls the run's exit code and logging behavior.
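The exit-code behavior can be checked directly from a shell (a sketch; exact codes can vary by dbt version, but the convention is that error-severity failures return nonzero while warn-only runs return zero):

```
dbt test
echo $?   # 0 when tests pass or only warn; nonzero when an 'error' test fails
```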
Why designed this way?
Originally, dbt treated all test failures as errors to ensure strict data quality. But teams needed flexibility to handle non-critical issues without blocking workflows. Adding severity levels allowed a simple, clear way to mark tests by importance without complex custom code. This design balances safety and agility.
┌───────────────┐
│   Run Tests   │
└──────┬────────┘
       │
       ▼
┌───────────────┐
│ Execute SQL   │
│ Test Queries  │
└──────┬────────┘
       │
       ▼
┌───────────────┐
│ Collect Fail  │
│ Rows          │
└──────┬────────┘
       │
       ▼
┌───────────────┐
│ Check Severity│
│ Config        │
└──────┬────────┘
       │
 ┌─────┴─────┐
 │           │
 ▼           ▼
Error      Warning
 │           │
Stop Run   Log Warning
 │           │
┌▼─────────▼┐
│ Exit Code │
│ & Logs    │
└───────────┘
Myth Busters - 4 Common Misconceptions
Quick: Does setting severity to 'warn' mean dbt ignores the test failure completely? Commit yes or no.
Common Belief:If a test has severity 'warn', dbt ignores failures and they don't matter.
Reality:Severity 'warn' means dbt continues running but still reports the failure as a warning for review.
Why it matters:Ignoring warnings can let data quality issues build up unnoticed, causing bigger problems later.
Quick: Can you set severity levels dynamically during a dbt run? Commit yes or no.
Common Belief:Severity levels can change automatically based on data or environment during a run.
Reality:Severity is resolved per test when the project is parsed; it cannot change mid-run without custom code.
Why it matters:Expecting dynamic severity without custom logic can lead to confusion and missed alerts.
Quick: Does dbt treat all test failures as equally critical by default? Commit yes or no.
Common Belief:By default, all test failures stop the run and are treated as errors.
Reality:Yes, by default all tests have severity 'error' unless configured otherwise.
Why it matters:Knowing the default helps avoid unexpected pipeline failures.
Quick: Can you create custom severity levels like 'info' or 'critical' in dbt natively? Commit yes or no.
Common Belief:dbt supports many custom severity levels beyond 'error' and 'warn'.
Reality:dbt only supports 'error' and 'warn' natively; others require custom extensions.
Why it matters:Assuming more levels exist natively can cause misconfiguration and missed alerts.
Expert Zone
1
Severity 'warn' tests still affect the test results table and can be used to track data quality trends over time.
2
Some teams use severity 'warn' tests as early warnings to catch issues before they become errors, integrating them into monitoring dashboards.
3
Combining severity with test tags and custom macros allows flexible grouping and handling of tests in complex projects.
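dbt also supports threshold-based severity via the warn_if and error_if test configs, which compare the count of failing rows against a condition. A sketch with illustrative names and thresholds:

```
# models/schema.yml -- thresholds on the number of failing rows
models:
  - name: events
    columns:
      - name: user_id
        tests:
          - not_null:
              config:
                severity: error
                warn_if: ">10"    # more than 10 failing rows: warn
                error_if: ">100"  # more than 100 failing rows: error
```

With these thresholds, a handful of failing rows passes silently, a moderate number warns, and a large number fails the run.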
When NOT to use
Severity levels are not a substitute for fixing data issues; they only control how dbt reacts. In critical pipelines, downgrading checks to warnings and leaving the underlying failures unaddressed can let bad data propagate downstream. When you need more granular control than 'error' and 'warn' provide, consider custom alerting or external data quality tools.
Production Patterns
In production, teams often set core data integrity tests as 'error' to block runs on failure, while less critical tests like freshness or completeness use 'warn'. They integrate dbt test results with monitoring tools like Airflow or PagerDuty to automate alerts based on severity.
Connections
Software Testing Severity Levels
Same pattern of classifying test failures by importance to guide response.
Understanding severity in software tests helps grasp why dbt uses similar levels to manage data quality failures.
Incident Management in IT Operations
Severity levels in dbt tests relate to incident priority levels in IT systems.
Knowing how IT teams prioritize incidents by severity clarifies how data teams prioritize test failures.
Traffic Control Systems
Severity levels function like traffic signals controlling flow based on risk.
Seeing severity as traffic control helps understand balancing safety and flow in data pipelines.
Common Pitfalls
#1 Treating all test failures as errors and stopping runs unnecessarily.
Wrong approach:
tests:
  - name: unique_customer_id
    config:
      severity: error
  - name: freshness_check
    config:
      severity: error
Correct approach:
tests:
  - name: unique_customer_id
    config:
      severity: error
  - name: freshness_check
    config:
      severity: warn
Root cause:Not using severity levels to differentiate critical and non-critical tests causes pipeline delays.
#2 Ignoring warnings because they don't stop the run.
Wrong approach:
# No action taken on warnings
# Team assumes warnings are not important
Correct approach:
# Monitor warnings regularly
# Set alerts or tickets for warnings to investigate
Root cause:Misunderstanding that warnings still indicate real data issues needing attention.
#3 Trying to set severity dynamically inside a test SQL query.
Wrong approach:
select * from table where condition
-- attempting to change severity here dynamically
Correct approach:
{{ config(severity='warn') }}
select * from table where condition
Root cause:Confusing test logic with configuration settings leads to invalid attempts to change severity.
Key Takeaways
Test severity levels in dbt let you mark failures as errors or warnings to control pipeline behavior.
Errors stop the run and demand immediate fixes, while warnings alert without blocking progress.
Setting severity per test or globally helps balance strictness and flexibility in data quality checks.
Understanding severity improves how you read test results and prioritize data issues.
Advanced users extend severity handling with custom macros and integrations for richer workflows.