Scan the symptom column for your error. The fix column is the first thing to try, not the only thing.
## Compilation errors

### Compilation Error: 'dbt_utils' is undefined

You referenced a macro from a package that is not installed.

Fix:

- Run `dbt deps` to install the packages declared in `packages.yml`.
- Verify the package is listed in `packages.yml` (not just mentioned in a macro).
- Macro names are case-sensitive: `dbt_utils.star`, not `dbt_utils.STAR`.
- For custom macros, confirm the file is in `macros/` with a `.sql` extension.
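A minimal `packages.yml` at the project root looks like this (the version shown is illustrative, not a recommendation):

```yaml
# packages.yml — run `dbt deps` after editing this file.
packages:
  - package: dbt-labs/dbt_utils
    version: 1.3.0
```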
### depends on a node named 'X' which was not found

You `ref()`ed a model that does not exist.

Fix:

- Run `dbt ls --select <model_name>` to check whether dbt can see it.
- Check for typos in `{{ ref('…') }}`.
- For cross-package refs: `{{ ref('package_name', 'model_name') }}`.
- If you renamed or deleted a model, search the project for old references.
### Expected an expression, got 'end of statement block'

Jinja syntax is off.

Fix:

- Check for matching `{% if %}`/`{% endif %}` and `{% for %}`/`{% endfor %}` pairs.
- Watch for `{{ … }}` vs. `{% … %}` confusion.
- Run `dbt compile --select <model>` to see the expansion and the line it broke on.
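A correctly balanced block looks like this sketch (the model and column names are hypothetical):

```sql
-- Every {% if %} has a matching {% endif %};
-- {{ … }} emits a value, {% … %} is control flow.
select
    order_id,
    {% if target.name == 'prod' %}
        amount
    {% else %}
        0 as amount  -- mask real amounts outside prod
    {% endif %}
from {{ ref('stg_orders') }}
```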
### Error reading <file>.yml — Invalid YAML

YAML indentation or special-character issue.

Fix:

- Use spaces, not tabs.
- Put a space after every colon: `name: value`, not `name:value`.
- Quote strings containing `:`, `{`, or `[`.
- Pipe the file through a YAML validator if you cannot see the problem.
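A small sketch of the quoting rule (model and descriptions are hypothetical):

```yaml
models:
  - name: stg_orders
    description: "Orders: one row per order"  # quoted — the value contains a colon
    columns:
      - name: order_id
        description: Primary key              # no special characters, no quotes needed
```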
## Database errors

### TABLE_OR_VIEW_NOT_FOUND

The upstream object does not exist, was dropped, or lives in a different catalog/schema.

Fix:

- Run upstream models first: `dbt run --select +<failed_model>`.
- Verify the catalog/schema in the profile matches the target (`dbt debug`).
- Verify Unity Catalog grants: `USE CATALOG`, `USE SCHEMA`, `SELECT`.
- On a fresh environment, populate everything in dependency order with `dbt build`.
### PERMISSION_DENIED

The service principal or user lacks a grant.

Fix:

```sql
GRANT USE CATALOG ON CATALOG prod TO `dbt-service-principal`;
GRANT USE SCHEMA ON SCHEMA prod.bronze TO `dbt-service-principal`;
GRANT SELECT ON TABLE prod.bronze.raw_events TO `dbt-service-principal`;
GRANT CREATE TABLE ON SCHEMA prod.silver TO `dbt-service-principal`;
```
### STATEMENT_TIMEOUT

The SQL query exceeded the warehouse timeout.

Fix:

- Look at the compiled SQL. Is there an unbounded join? A missing partition filter?
- For incremental models: is the `is_incremental()` filter narrowing the scan effectively?
- Break the query into intermediate models; reduce join breadth.
- Last resort: increase warehouse size or client timeout.
> **Warning:** "Bigger warehouse" is an anti-fix. It hides slow SQL that will cost you more next month when the data grows. Always investigate the query plan before bumping size.
### DELTA_MISSING_COLUMN

A column you reference no longer exists in the Delta table.

Fix:

- Check the compiled SQL for the missing column.
- If the upstream model was rewritten, run the downstream model with `--full-refresh` and the updated SQL.
- If `select *` is expanding to the missing column, list columns explicitly.
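A sketch of the explicit-column fix (column names are hypothetical); listing columns means upstream schema drift fails at compile time instead of mid-run:

```sql
-- Instead of: select * from {{ ref('stg_orders') }}
select
    order_id,
    customer_id,
    amount,
    _loaded_at
from {{ ref('stg_orders') }}
```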
### MERGE_CARDINALITY_VIOLATION

Your incremental model's `unique_key` has duplicates on the source side within the merge batch.

Fix: deduplicate in the model before the merge:

```sql
with ranked as (
    select *,
        row_number() over (
            partition by order_id
            order by _loaded_at desc
        ) as rn
    from {{ ref('stg_orders') }}
)
select * except(rn) from ranked where rn = 1
```

Or switch to `incremental_strategy: 'delete+insert'`, which is more tolerant of source duplicates.
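A minimal config sketch for that switch, assuming your adapter supports the `delete+insert` strategy (the model and key are hypothetical):

```sql
{{
    config(
        materialized='incremental',
        incremental_strategy='delete+insert',
        unique_key='order_id'
    )
}}
select * from {{ ref('stg_orders') }}
```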
### SCHEMA_CHANGE_NOT_ALLOWED

The incremental model's SQL output does not match the target table's schema.

Fix:

- Check `on_schema_change` in the model's `config()`:
  - `append_new_columns` handles additive changes.
  - `sync_all_columns` handles additions and removals.
- For type changes, run with `--full-refresh`; dbt will not reconcile those in place.
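Declaring the policy looks like this sketch (key name is hypothetical):

```sql
{{
    config(
        materialized='incremental',
        unique_key='order_id',
        on_schema_change='append_new_columns'
    )
}}
```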
### WAREHOUSE_NOT_RUNNING

The SQL Warehouse is stopped.

Fix:

- Start the warehouse in the Databricks UI.
- Check the auto-suspend setting (default is 30 minutes of inactivity).
- For production workloads, set `min_num_clusters: 1` so the warehouse stays warm during scheduled runs.
## Incremental model errors

### Incremental is slower than table

Your `is_incremental()` block is not narrowing the source scan.

Fix: verify the predicate uses a column the source is sorted or partitioned on, and anchors to `{{ this }}`:

```sql
where updated_at >= (
    select coalesce(max(updated_at), '1900-01-01') from {{ this }}
)
```

Not:

```sql
-- Wrong: reads the whole source, then filters in memory
where updated_at > current_date - interval 1 day
```
### Rows disappearing after a delayed run

Your `is_incremental()` filter is anchored to wall-clock time instead of `max(...)` from the target, so a late run misses the gap.

Fix: always anchor to the target's current maximum watermark.
### Full refresh decision matrix

| Scenario | Full refresh needed? |
|---|---|
| Additive column upstream, `on_schema_change: append_new_columns` | No |
| Column type changed | Yes |
| Column removed, `on_schema_change: sync_all_columns` | No |
| Column removed, anything else | Yes |
| Incremental logic bug fix | Yes |
| Historical source data corrected | Yes, or targeted `replace_where` |
| Normal nightly run | No |
## Package errors

### Could not find a version of package X that matches supplied criteria

Version constraints in `packages.yml` are incompatible.

Fix:

- Loosen the bound to a range that overlaps: `">=1.0.0"` instead of `"==1.0.3"`.
- Run `dbt deps --upgrade` to resolve the latest compatible set.
- If two packages genuinely conflict, one of them needs an update upstream.
### git error: Authentication failed

A private package repo needs auth.

Fix:

- Switch to SSH: `git@github.com:org/package.git`.
- Or set `DBT_ENV_SECRET_GIT_CREDENTIAL` for HTTPS.
- Confirm the CI runner has credentials loaded.
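A sketch of the SSH form in `packages.yml` (org, repo, and tag are hypothetical):

```yaml
packages:
  - git: "git@github.com:org/package.git"
    revision: v1.2.0   # pin to a tag or commit for reproducible builds
```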
## Connection errors

### Could not find profile named 'X'

Profile name mismatch, or `profiles.yml` is not where dbt looked.

Fix:

- `profiles.yml` lives at `~/.dbt/profiles.yml` unless `--profiles-dir` overrides it.
- The key in `profiles.yml` must match `profile:` in `dbt_project.yml`.
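A minimal sketch of a Databricks profile; the project name, catalog, schema, and environment variables are placeholders:

```yaml
# ~/.dbt/profiles.yml
my_project:            # <- must match `profile:` in dbt_project.yml
  target: dev
  outputs:
    dev:
      type: databricks
      catalog: dev
      schema: analytics
      host: "{{ env_var('DATABRICKS_HOST') }}"
      http_path: "{{ env_var('DATABRICKS_HTTP_PATH') }}"
      token: "{{ env_var('DATABRICKS_TOKEN') }}"
```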
### Connection refused or Connection timeout

Fix:

- Verify `DATABRICKS_HOST` is correct (no trailing slash, no protocol).
- Check the warehouse is running and has capacity.
- Check the network: VPN, security group, firewall.
- Run `dbt debug` for a concise diagnostic.
## Snapshot errors

### 'updated_at' column does not exist

The `timestamp` snapshot strategy requires the column it is configured to watch.

Fix:

- Verify the column exists in the source.
- If the source renamed it, update `updated_at: new_column_name` in the snapshot config.
- For sources without a reliable timestamp, switch to `strategy: check` and list the columns to watch.
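A sketch of the `check` strategy (snapshot name, source, and watched columns are hypothetical):

```sql
{% snapshot orders_snapshot %}
{{
    config(
        target_schema='snapshots',
        unique_key='order_id',
        strategy='check',
        check_cols=['status', 'amount']
    )
}}
select * from {{ source('shop', 'orders') }}
{% endsnapshot %}
```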
### Snapshot table growing unboundedly

`invalidate_hard_deletes` is off and the source is deleting rows.

Fix:

```yaml
snapshots:
  my_project:
    +invalidate_hard_deletes: true
```
## Diagnostic commands

```shell
# Test everything
dbt debug

# Parse errors only (no warehouse needed)
dbt parse

# Which models errored last run
cat target/run_results.json | \
  jq -r '.results[] | select(.status=="error") | .unique_id'

# Blast radius of a broken model
dbt ls --select <model>+ --output name | wc -l

# What changed vs. prod
dbt ls --select state:modified --state ./prod-manifest/
```
## See also
- Failure triage guide — the 5-minute procedure when a run fails in prod.
- Incremental models guide — prevent the majority of incremental failures.
- CLI commands — selector syntax for targeted re-runs.