dbt has about two dozen subcommands. Five of them do most of the work.

Daily commands

dbt build

Runs models, then runs their tests, in dependency order. If a test fails, downstream models do not run. This is the correct command for CI and prod; prefer it over dbt run + dbt test.

dbt build                         # everything
dbt build --select fct_orders     # one model + its tests
dbt build --select marts.*        # all models in marts/
dbt build --select state:modified+ --defer --state ./prod-manifest/

dbt run

Runs models only, no tests. Use when you are iterating and do not need the test signal yet.

dbt run --select stg_customers              # one model
dbt run --select +fct_orders                # upstream of fct_orders
dbt run --select fct_orders+                # downstream of fct_orders
dbt run --select tag:critical               # tagged models
dbt run --select fct_orders --full-refresh  # rebuild from scratch

dbt test

Runs tests without re-running models. Use when you already have fresh data and want to verify quality.

dbt test --select stg_customers             # tests on one model
dbt test --select test_type:unit            # only unit tests
dbt test --select test_type:generic         # only not_null/unique/etc.
dbt test --store-failures                   # write failing rows to audit schema

dbt compile

Resolves Jinja, expands ref()/source(), emits SQL under target/compiled/. Does not execute anything. Fast; use it to see exactly what Databricks will run.

dbt compile --select fct_revenue
cat target/compiled/my_project/models/marts/finance/fct_revenue.sql

dbt source freshness

Checks that source tables have been updated within their declared thresholds. Emits warnings and errors per source.

dbt source freshness
dbt source freshness --select source:bronze.raw_transactions

Selectors

dbt's selector syntax is more powerful than it looks. Prefix and suffix characters (and method:value filters) modify the selection:

Syntax                           Meaning
model_name                       Just that model
+model_name                      That model and everything upstream
model_name+                      That model and everything downstream
+model_name+                     That model and everything upstream and downstream
staging.*                        All models in models/staging/
tag:daily                        All models tagged daily
path:models/marts                All models under a path
config.materialized:incremental  All incremental models
state:modified+                  Modified vs. stored manifest, plus downstream
result:error                     Models that errored in the last run (requires --state target/)
test_type:unit                   Only unit tests

Combine with commas (union) and spaces (intersection):

dbt build --select tag:daily staging.*            # intersection
dbt build --select tag:daily,staging.*            # union
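The comma/space semantics are plain set operations. A toy Python model (not dbt's actual resolver; the node names are hypothetical):

```python
# Toy model of dbt's selector combination semantics.
# Hypothetical node sets for illustration only.
daily = {"stg_orders", "fct_orders", "fct_revenue"}   # matches tag:daily
staging = {"stg_orders", "stg_customers"}             # matches staging.*

# Space-separated criteria intersect: nodes matching BOTH.
intersection = daily & staging

# Comma-separated criteria union: nodes matching EITHER.
union = daily | staging

print(sorted(intersection))
print(sorted(union))
```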

Exclude with --exclude:

dbt build --select marts.* --exclude fct_legacy

State-based commands

state: selectors compare the current project against a reference manifest.json, usually prod's. Ship CI on top of these.

# Download the prod manifest first (from S3 / ADLS / GCS)
aws s3 cp s3://dbt-artifacts/prod/manifest.json ./prod-manifest/

# Build only what changed
dbt build \
  --select state:modified+ \
  --defer \
  --state ./prod-manifest/ \
  --favor-state

Two flags pair with state: selectors. --defer resolves ref()s to models you did not select against the schemas recorded in the state manifest, so you can build a subset without first rebuilding its parents. --favor-state goes further: when a node exists in both your target schema and the state manifest, dbt uses the state version.

See the Slim CI guide for the full CI pipeline.

Less common but useful

dbt debug

Confirms that your profile parses, the warehouse connection works, and required packages are present. First thing to run on a new machine.

dbt debug

dbt deps

Installs packages listed in packages.yml. Run after cloning the repo and whenever packages.yml changes.

dbt deps
dbt deps --upgrade   # resolve to latest compatible versions

dbt ls

Lists resources without running anything. Useful for counting models, checking what a selector matches, or piping node names into other tools.

dbt ls --select +fct_revenue --output name  # one name per line
dbt ls --resource-type model --output json
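The JSON output is one object per line, which makes it scriptable. A sketch that groups models by materialization, assuming the "name" and "config.materialized" keys recent dbt-core versions emit (check your version's output):

```python
import json

# Group model names by materialization from `dbt ls --output json` lines.
# Key names here match recent dbt-core output but may differ by version.
def group_by_materialization(lines):
    groups = {}
    for line in lines:
        node = json.loads(line)
        mat = node.get("config", {}).get("materialized", "view")
        groups.setdefault(mat, []).append(node["name"])
    return groups

sample = [
    '{"name": "stg_orders", "config": {"materialized": "view"}}',
    '{"name": "fct_orders", "config": {"materialized": "incremental"}}',
]
print(group_by_materialization(sample))
```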

dbt parse

Parses the project and writes target/manifest.json. Faster than compile because it does not render every model's SQL. Use in CI to pre-generate the manifest before Cosmos reads it.

dbt parse

dbt docs generate

Walks the database for column types, generates target/catalog.json, merges it with the manifest, and produces a browsable static site.

dbt docs generate
dbt docs serve   # local server on http://localhost:8080

Flags that apply everywhere

Flag                   Effect
--target NAME          Which profile target to use (dev, prod, ci)
--profiles-dir PATH    Location of profiles.yml
--project-dir PATH     Run against a project in a different directory
--vars '{key: value}'  Override vars: declared in dbt_project.yml
--threads N            Parallelism across model runs
--debug                Verbose logging including compiled SQL
--full-refresh         Rebuild incremental models from scratch
--no-partial-parse     Re-parse the whole project (disable incremental parse)

Output files under target/

Every dbt invocation emits artifacts:

File                   Produced by                             What it is
manifest.json          Any command                             Full DAG + metadata
run_results.json       run, test, build, seed                  Results of the last invocation
catalog.json           docs generate                           Schema info from the warehouse
sources.json           source freshness                        Source freshness results
compiled/<path>.sql    compile, run, build                     The SQL dbt rendered
run/<path>.sql         run, build                              The SQL dbt actually submitted
partial_parse.msgpack  Any command with partial parse enabled  Cached parse state
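run_results.json in particular is worth scripting against, e.g. to fail a CI step or alert on specific nodes. A hedged sketch, assuming the "results"/"status"/"unique_id" shape of recent dbt-core artifact schemas (node names below are invented):

```python
from collections import Counter

# Summarize target/run_results.json. Load the real file with
# json.load(open("target/run_results.json")); the shape assumed here
# matches recent dbt-core artifact schemas, but verify for your version.
def summarize(run_results):
    counts = Counter(r["status"] for r in run_results["results"])
    failed = [r["unique_id"] for r in run_results["results"]
              if r["status"] in ("error", "fail")]
    return counts, failed

run_results = {"results": [
    {"unique_id": "model.my_project.fct_orders", "status": "success"},
    {"unique_id": "test.my_project.not_null_fct_orders_id", "status": "fail"},
]}
counts, failed = summarize(run_results)
print(dict(counts), failed)
```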

Note

manifest.json is the build output that everything downstream cares about: Slim CI, Cosmos, exposures, lineage tools. Treat it like a compiled artifact: generate it in CI and upload it to object storage after every successful prod run.
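To see why the manifest matters for Slim CI, note that state:modified is, at its core, a checksum diff between two manifests. A rough Python sketch (real dbt also considers config, macro, and upstream changes; the node IDs are hypothetical):

```python
# Rough sketch of state:modified: compare each node's content checksum in
# the current manifest against the reference (prod) manifest. Real dbt is
# more thorough (configs, macros, upstream changes), but the core idea is
# a checksum diff. Manifest nodes carry a "checksum" object in recent
# dbt-core versions; verify the schema for yours.
def modified_nodes(current, reference):
    ref_nodes = reference["nodes"]
    return sorted(
        uid for uid, node in current["nodes"].items()
        if uid not in ref_nodes
        or node["checksum"]["checksum"] != ref_nodes[uid]["checksum"]["checksum"]
    )

current = {"nodes": {
    "model.p.stg_orders": {"checksum": {"checksum": "abc123"}},
    "model.p.fct_orders": {"checksum": {"checksum": "new456"}},
}}
reference = {"nodes": {
    "model.p.stg_orders": {"checksum": {"checksum": "abc123"}},
    "model.p.fct_orders": {"checksum": {"checksum": "old789"}},
}}
print(modified_nodes(current, reference))
```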

See also