Scan the symptom column for your error. The fix column is the first thing to try, not the only thing.

Compilation errors

Compilation Error: 'dbt_utils' is undefined

You referenced a macro from a package that is not installed.

Fix:

  1. dbt deps to install packages declared in packages.yml.
  2. Verify the package is in packages.yml (not just mentioned in a macro).
  3. Case-sensitive: dbt_utils.star is not dbt_utils.STAR.
  4. For custom macros, confirm the file is in macros/ with a .sql extension.
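For step 2, a minimal packages.yml entry looks like this (the version is illustrative; pin whatever your project has tested):

```yaml
packages:
  - package: dbt-labs/dbt_utils
    version: 1.1.1
```

After editing the file, dbt deps must be run again before the macro resolves.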

depends on a node named X which was not found

You ref()ed a model that does not exist.

Fix:

  1. dbt ls --select <model_name> to check whether dbt can see it.
  2. Check for typos in {{ ref('…') }}.
  3. For cross-package refs: {{ ref('package_name', 'model_name') }}.
  4. If you renamed or deleted a model, search the project for old references.

Expected an expression, got 'end of statement block'

Jinja syntax is off.

Fix:

  1. Check matching {% if %} / {% endif %}, {% for %} / {% endfor %}.
  2. Watch for {{ … }} vs. {% … %} confusion.
  3. dbt compile --select <model> to see the expansion and the line it broke on.
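A correctly paired block, for contrast (model and column names illustrative): {% … %} opens control flow and must be closed; {{ … }} only renders a value.

```sql
select id, status, updated_at
from {{ ref('stg_orders') }}    -- {{ ... }} renders an expression in place
{% if is_incremental() %}       -- {% ... %} opens a block ...
where updated_at > (select max(updated_at) from {{ this }})
{% endif %}                     -- ... and every block needs its closing tag
```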

Error reading <file>.yml — Invalid YAML

YAML indentation or special-character issue.

Fix:

  1. Spaces, not tabs.
  2. Space after every colon: name: value, not name:value.
  3. Quote strings containing :, {, [.
  4. Pipe through a YAML validator if you cannot see the problem.
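An entry that follows all three rules (names illustrative); the description is quoted because it contains a colon:

```yaml
models:
  - name: orders
    description: "Orders: one row per order"  # colon inside the string, so quoted
    columns:
      - name: order_id
        tests:
          - unique
```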

Database errors

TABLE_OR_VIEW_NOT_FOUND

The upstream does not exist, was dropped, or is in a different catalog/schema.

Fix:

  1. Run upstream first: dbt run --select +<failed_model>.
  2. Verify catalog/schema in the profile matches the target (dbt debug).
  3. Verify Unity Catalog grants: USE CATALOG, USE SCHEMA, SELECT.
  4. On a fresh environment, build everything once with dbt build, which executes models, seeds, and snapshots in dependency order.


PERMISSION_DENIED

The service principal or user lacks a grant.

Fix:

GRANT USE CATALOG ON CATALOG prod TO `dbt-service-principal`;
GRANT USE SCHEMA ON SCHEMA prod.bronze TO `dbt-service-principal`;
GRANT SELECT ON TABLE prod.bronze.raw_events TO `dbt-service-principal`;
GRANT CREATE TABLE ON SCHEMA prod.silver TO `dbt-service-principal`;

STATEMENT_TIMEOUT

SQL query exceeded the warehouse timeout.

Fix:

  1. Look at the compiled SQL. Is there an unbounded join? A missing partition filter?
  2. For incremental: is the is_incremental() filter narrowing effectively?
  3. Break into intermediate models; reduce join breadth.
  4. Last resort: increase warehouse size or client timeout.

Warning

"Bigger warehouse" is an anti-fix. It hides slow SQL that will cost you more next month when the data grows. Always investigate the query plan before bumping size.

DELTA_MISSING_COLUMN

A column you reference no longer exists in the Delta table.

Fix:

  1. Check compiled SQL for the missing column.
  2. If upstream was rewritten: --full-refresh the downstream model with the updated SQL.
  3. If select * is expanding to the missing column, list columns explicitly.
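For step 3, a sketch of pinning the projection instead of star-expanding (column names illustrative):

```sql
-- select * silently picks up, or breaks on, upstream schema changes.
-- An explicit list makes the model's column contract visible and reviewable.
select
    event_id,
    event_type,
    occurred_at
from {{ ref('stg_events') }}
```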

MERGE_CARDINALITY_VIOLATION

Your incremental model's unique_key has duplicates on the source side within the merge batch.

Fix: Deduplicate in the model before the merge:

with ranked as (
    select *,
           row_number() over (
               partition by order_id
               order by _loaded_at desc
           ) as rn
    from {{ ref('stg_orders') }}
)
select * except(rn) from ranked where rn = 1

Or switch to incremental_strategy: 'delete+insert', which is more tolerant of source duplicates.
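A sketch of that strategy switch in the model's config block (the unique_key is illustrative):

```sql
{{ config(
    materialized='incremental',
    incremental_strategy='delete+insert',
    unique_key='order_id'
) }}
```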

SCHEMA_CHANGE_NOT_ALLOWED

Incremental model's SQL output does not match the target's schema.

Fix:

  1. Check on_schema_change in the model's config():
    • append_new_columns handles additive changes.
    • sync_all_columns handles additive + removals.
  2. For type changes: --full-refresh. dbt will not reconcile those live.
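A sketch of step 1 in a model header (unique_key illustrative; the on_schema_change values are the ones dbt documents for incremental models):

```sql
{{ config(
    materialized='incremental',
    unique_key='order_id',
    on_schema_change='append_new_columns'
) }}
```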

WAREHOUSE_NOT_RUNNING

The SQL Warehouse is stopped.

Fix:

  1. Start the warehouse in the Databricks UI.
  2. Check auto-suspend settings (default is 30 minutes of inactivity).
  3. For production workloads, set min_num_clusters: 1 so the warehouse stays warm during scheduled runs.

Incremental model errors

Incremental is slower than table

Your is_incremental() block is not narrowing the source scan.

Fix: Verify the predicate uses a column the source is sorted or partitioned on, and anchors to {{ this }}:

where updated_at >= (
    select coalesce(max(updated_at), '1900-01-01') from {{ this }}
)

Not:

-- Wrong: reads the whole source, then filters in memory
where updated_at > current_date - interval 1 day

Rows disappearing after a delayed run

Your is_incremental() filter is anchored to wall-clock time instead of max(...) from the target. A late run misses the gap.

Fix: Always anchor to the target's current maximum watermark.

Full refresh decision matrix

Scenario                                                          Full refresh needed?
Additive column upstream, on_schema_change: append_new_columns    No
Column type changed                                               Yes
Column removed, on_schema_change: sync_all_columns                No
Column removed, anything else                                     Yes
Incremental logic bug fix                                         Yes
Historical source data corrected                                  Yes, or targeted replace_where
Normal nightly run                                                No

Package errors

Could not find a version of package X that matches supplied criteria

Version constraints in packages.yml are incompatible.

Fix:

  1. Loosen the bound to a range that overlaps: ">=1.0.0" instead of "==1.0.3".
  2. dbt deps --upgrade to resolve latest compatible.
  3. If two packages genuinely conflict, one of them needs an update upstream.
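Step 1 in packages.yml, loosening a hard pin to a range (bounds illustrative):

```yaml
packages:
  - package: dbt-labs/dbt_utils
    # was: version: 1.0.3
    version: [">=1.0.0", "<2.0.0"]
```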

git error: Authentication failed

A private package repo needs auth.

Fix:

  1. Switch to SSH: git@github.com:org/package.git.
  2. Or set DBT_ENV_SECRET_GIT_CREDENTIAL for HTTPS.
  3. Confirm the CI runner has credentials loaded.
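A packages.yml entry using the SSH form from step 1 (the revision is illustrative; pin a tag or commit in production):

```yaml
packages:
  - git: "git@github.com:org/package.git"
    revision: main
```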

Connection errors

Could not find profile named 'X'

Profile name mismatch or profiles.yml not where dbt looked.

Fix:

  1. profiles.yml lives at ~/.dbt/profiles.yml unless --profiles-dir (or the DBT_PROFILES_DIR environment variable) overrides it.
  2. The key in profiles.yml must match profile: in dbt_project.yml.
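The two files must agree on the name (all values here are illustrative):

```yaml
# dbt_project.yml
profile: databricks_prod

# ~/.dbt/profiles.yml -- the top-level key must match profile: above
databricks_prod:
  target: dev
  outputs:
    dev:
      type: databricks
      host: dbc-12345.cloud.databricks.com
      http_path: /sql/1.0/warehouses/abc123
      schema: dev_analytics
      token: "{{ env_var('DBT_DATABRICKS_TOKEN') }}"
```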

Connection refused or Connection timeout

Fix:

  1. Verify DATABRICKS_HOST is correct (no trailing slash, no protocol).
  2. Warehouse is running and has capacity.
  3. Network: VPN, security group, firewall.
  4. dbt debug for a concise diagnostic.

Snapshot errors

'updated_at' column does not exist

Snapshot strategy timestamp requires the column it is configured to watch.

Fix:

  1. Verify the column is in the source.
  2. If the source renamed it, update updated_at: new_column_name in the snapshot config.
  3. For sources without a reliable timestamp, switch to strategy: check and list the columns to watch.
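A check-strategy snapshot for step 3 (names and watched columns illustrative):

```sql
{% snapshot orders_snapshot %}
{{ config(
    target_schema='snapshots',
    unique_key='order_id',
    strategy='check',
    check_cols=['status', 'amount']
) }}
select * from {{ source('shop', 'orders') }}
{% endsnapshot %}
```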

Snapshot table growing unboundedly

invalidate_hard_deletes is off and the source is deleting rows, so deleted records are never closed out.

Fix:

snapshots:
  my_project:
    +invalidate_hard_deletes: true

Diagnostic commands

# Test everything
dbt debug

# Parse errors only (no warehouse needed)
dbt parse

# Which models errored last run
cat target/run_results.json | \
  jq -r '.results[] | select(.status=="error") | .unique_id'

# Blast radius of a broken model
dbt ls --select <model>+ --output name | wc -l

# What changed vs. prod
dbt ls --select state:modified --state ./prod-manifest/
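When jq is not available, the run_results filter above can be reproduced with standard-library Python; the default path is dbt's usual output location:

```python
import json

def errored_models(path="target/run_results.json"):
    """Return the unique_id of every node whose last run status was 'error'."""
    with open(path) as f:
        run_results = json.load(f)
    return [r["unique_id"] for r in run_results["results"]
            if r["status"] == "error"]

if __name__ == "__main__":
    for uid in errored_models():
        print(uid)
```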

See also