Hard · Application · Question 15 of 15
dbt - Performance Optimization
You want to optimize a large fact table in Snowflake using dbt. Which combination of warehouse-specific optimizations is best to improve query speed and reduce costs?
A. Use full refresh materialization and no clustering
B. Use clustering keys on date columns and incremental materialization with merge strategy
C. Use ephemeral models only without materialization
D. Use incremental materialization without unique_key and no clustering
Step-by-Step Solution
  1. Identify best practices for large Snowflake fact tables

    Clustering keys on date columns improve micro-partition pruning, so queries that filter on those dates scan less data. Incremental materialization with the merge strategy processes only new or changed rows instead of reloading the entire table on every run.
  2. Evaluate the options

    Option B combines clustering on date columns with incremental merge, which is optimal for both query speed and cost. The other options either skip clustering (A, D), omit the unique_key needed for merge to update existing rows (D), or rely solely on ephemeral models, which are inlined as CTEs and never persisted (C).
  3. Final Answer:

    Use clustering keys on date columns and incremental materialization with merge strategy → Option B
  4. Quick Check:

    Clustering + incremental merge = best for large tables [OK]
Quick Trick: Combine clustering and incremental with merge for big tables [OK]
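The winning combination can be sketched as a dbt model. This is a minimal illustration, not a production template: the model name, source, and column names (event_id, event_date) are assumptions chosen for the example.

```sql
-- models/fct_events.sql (hypothetical model; names are illustrative)
{{
  config(
    materialized='incremental',
    incremental_strategy='merge',
    unique_key='event_id',          -- lets merge update existing rows
    cluster_by=['event_date']       -- Snowflake clustering key for pruning
  )
}}

select
    event_id,
    event_date,
    user_id,
    amount
from {{ source('raw', 'events') }}

{% if is_incremental() %}
  -- on incremental runs, only scan rows newer than what is already loaded
  where event_date >= (select max(event_date) from {{ this }})
{% endif %}
```

On the first run dbt builds the full table; on subsequent runs the `is_incremental()` filter limits the scan and the merge strategy upserts by `event_id`, while `cluster_by` keeps the table organized around the date column most queries filter on.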
Common Mistakes:
  • Skipping clustering on large tables
  • Using full refresh unnecessarily
  • Omitting unique_key in incremental models (without it, the merge strategy cannot match and update existing rows)
