dbt has about two dozen subcommands. Five of them do most of the work.
Daily commands
dbt build
Runs each model and then its tests, in dependency order. If a test fails, the models downstream of it are skipped. This is the right command for CI and prod; prefer it over dbt run followed by dbt test.
```shell
dbt build                                  # everything
dbt build --select fct_orders              # one model + its tests
dbt build --select marts.*                 # all models in marts/
dbt build --select state:modified+ --defer --state ./prod-manifest/
```
dbt run
Runs models only, no tests. Use when you are iterating and do not need the test signal yet.
```shell
dbt run --select stg_customers              # one model
dbt run --select +fct_orders                # fct_orders + everything upstream
dbt run --select fct_orders+                # fct_orders + everything downstream
dbt run --select tag:critical               # tagged models
dbt run --select fct_orders --full-refresh  # rebuild from scratch
```
dbt test
Runs tests without re-running models. Use when you already have fresh data and want to verify quality.
```shell
dbt test --select stg_customers      # tests on one model
dbt test --select test_type:unit     # only unit tests
dbt test --select test_type:generic  # only not_null/unique/etc.
dbt test --store-failures            # write failing rows to audit schema
```
dbt compile
Resolves Jinja, expands ref()/source(), emits SQL under target/compiled/. Does not execute anything. Fast; use it to see exactly what Databricks will run.
```shell
dbt compile --select fct_revenue
cat target/compiled/my_project/models/marts/finance/fct_revenue.sql
```
dbt source freshness
Checks that source tables have been updated within their declared thresholds. Emits warnings and errors per source.
```shell
dbt source freshness
dbt source freshness --select source:bronze.raw_transactions
```
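The freshness results land in target/sources.json, which you can post-process in an orchestrator task. A minimal sketch, assuming the artifact's shape: a top-level results list whose entries carry unique_id and status:

```python
import json

def stale_sources(path="target/sources.json"):
    """Return unique_ids of sources whose freshness check did not pass.

    Assumes the artifact shape: a top-level "results" list whose
    entries carry "unique_id" and "status" ("pass", "warn", "error").
    """
    with open(path) as f:
        artifact = json.load(f)
    return [r["unique_id"] for r in artifact.get("results", [])
            if r.get("status") != "pass"]
```

A scheduler task can call this and fail (or just alert) when the returned list is non-empty.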
Selectors
dbt's selector syntax is more powerful than it looks. The prefix characters modify the selection:
| Syntax | Meaning |
|---|---|
| model_name | Just that model |
| +model_name | That model and everything upstream |
| model_name+ | That model and everything downstream |
| +model_name+ | That model and everything upstream and downstream |
| staging.* | All models in models/staging/ |
| tag:daily | All models tagged daily |
| path:models/marts | All models under a path |
| config.materialized:incremental | All incremental models |
| state:modified+ | Modified vs. stored manifest, plus downstream |
| result:error | Models that errored in the last run (requires --state target/) |
| test_type:unit | Only unit tests |
Combine with spaces (union) and commas (intersection):
```shell
dbt build --select tag:daily staging.*   # union
dbt build --select tag:daily,staging.*   # intersection
```
Exclude with --exclude:
```shell
dbt build --select marts.* --exclude fct_legacy
```
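The union/intersection semantics are easy to model directly as set operations. A toy sketch, with hypothetical model names and tag membership:

```python
# Toy model of selector set semantics: space-separated arguments
# union their matches; comma-joined criteria intersect them.
# The model names and tag membership below are hypothetical.
daily = {"stg_orders", "fct_orders", "fct_revenue"}   # matches tag:daily
staging = {"stg_orders", "stg_customers"}             # matches staging.*

union = daily | staging         # --select "tag:daily staging.*"
intersection = daily & staging  # --select tag:daily,staging.*

print(sorted(union))         # ['fct_orders', 'fct_revenue', 'stg_customers', 'stg_orders']
print(sorted(intersection))  # ['stg_orders']
```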
State-based commands
state: selectors compare the current project against a reference manifest.json, usually prod's. Ship CI on top of these.
```shell
# Fetch the reference manifest first (from S3 / ADLS / GCS)
aws s3 cp s3://dbt-artifacts/prod/manifest.json ./prod-manifest/

# Build only what changed
dbt build \
  --select state:modified+ \
  --defer \
  --state ./prod-manifest/ \
  --favor-state
```
Two flags pair with state: selectors:
- --defer: when a ref points at an unmodified upstream, read prod's table instead of rebuilding it.
- --favor-state: when the dev schema holds a stale copy of a model, prefer the state reference.
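Under the hood, state:modified boils down to comparing node checksums between the two manifests. A rough sketch of that comparison, assuming the manifest's nodes mapping with a per-node checksum object; real dbt also considers configs, macros, and other inputs:

```python
import json

def modified_nodes(current_manifest, reference_manifest):
    """Roughly what state:modified does: flag nodes whose file
    checksum differs between the current manifest and the reference
    one. This sketch checks file checksums only; dbt itself also
    compares configs, macros, and more."""
    def checksums(path):
        with open(path) as f:
            nodes = json.load(f).get("nodes", {})
        return {uid: node.get("checksum", {}).get("checksum")
                for uid, node in nodes.items()}
    current = checksums(current_manifest)
    reference = checksums(reference_manifest)
    # New nodes have no reference entry, so they count as modified too.
    return sorted(uid for uid, c in current.items()
                  if reference.get(uid) != c)
```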
See the Slim CI guide for the full CI pipeline.
Less common but useful
dbt debug
Confirms that your profile parses, the warehouse connection works, and required packages are present. First thing to run on a new machine.
```shell
dbt debug
```
dbt deps
Installs packages listed in packages.yml. Run after cloning the repo and whenever packages.yml changes.
```shell
dbt deps
dbt deps --upgrade  # resolve to latest compatible versions
```
dbt ls
Lists resources without running anything. Useful for counting models, checking what a selector matches, or piping node names into other tools.
```shell
dbt ls --select +fct_revenue --output name  # one name per line
dbt ls --resource-type model --output json
```
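With --output json, dbt ls prints one JSON object per line, which makes it easy to feed other tools. A small sketch, assuming each line carries at least a name field:

```python
import json

def names_from_ls(json_lines):
    """Parse the one-JSON-object-per-line output of
    `dbt ls --output json` into a list of resource names.
    Skips blank lines."""
    return [json.loads(line)["name"]
            for line in json_lines if line.strip()]
```

You would typically feed it something like subprocess.run([...], capture_output=True, text=True).stdout.splitlines().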
dbt parse
Parses the project and writes target/manifest.json. Faster than compile because it does not render every model's SQL. Use in CI to pre-generate the manifest before Cosmos reads it.
```shell
dbt parse
```
dbt docs generate
Walks the database for column types, generates target/catalog.json, merges it with the manifest, and produces a browsable static site.
```shell
dbt docs generate
dbt docs serve     # local server on http://localhost:8080
```
Flags that apply everywhere
| Flag | Effect |
|---|---|
| --target NAME | Which profile target to use (dev, prod, ci) |
| --profiles-dir PATH | Location of profiles.yml |
| --project-dir PATH | Run against a project in a different directory |
| --vars '{key: value}' | Override vars: declared in dbt_project.yml |
| --threads N | Parallelism across model runs |
| --debug | Verbose logging including compiled SQL |
| --full-refresh | Rebuild incremental models from scratch |
| --no-partial-parse | Re-parse the whole project (disable incremental parse) |
Output files under target/
Every dbt invocation emits artifacts:
| File | Produced by | What it is |
|---|---|---|
| manifest.json | Any command | Full DAG + metadata |
| run_results.json | run, test, build, seed | Results of the last invocation |
| catalog.json | docs generate | Schema info from the warehouse |
| sources.json | source freshness | Source freshness results |
| compiled/<path>.sql | compile, run, build | The SQL dbt rendered |
| run/<path>.sql | run, build | The SQL dbt actually submitted |
| partial_parse.msgpack | Any command with partial parse enabled | Cached parse state |
Note
manifest.json is the build output that everything downstream cares about: Slim CI, Cosmos, exposures, lineage tools. Treat it like a compiled artifact: generate it in CI and upload it to object storage after every successful prod run.
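run_results.json is the natural starting point for failure triage. A minimal sketch, assuming the artifact's results list with unique_id and status fields per entry:

```python
import json

def failed_nodes(path="target/run_results.json"):
    """Return (unique_id, status) pairs for nodes that errored or
    failed in the last invocation. Assumes the artifact shape:
    a "results" list with "unique_id" and "status" per entry."""
    with open(path) as f:
        results = json.load(f).get("results", [])
    return [(r["unique_id"], r["status"]) for r in results
            if r.get("status") in ("error", "fail")]
```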
See also
- Slim CI guide: using state:modified+ in CI.
- Failure triage: which commands to reach for when a run fails.