The Databricks CLI speaks to every workspace API surface. In 2026 it is the primary interface for bundle deploys, cluster ops, SQL queries, and file transfers; the web UI is a visualizer.
Install:
brew install databricks # macOS
# or
curl -fsSL https://raw.githubusercontent.com/databricks/setup-cli/main/install.sh | sh # Linux / any platform
# Note: pip install databricks-cli is the legacy v0 CLI; avoid it for new work
Authenticate:
databricks auth login --host https://your-workspace.cloud.databricks.com
databricks current-user me # confirm
Bundles (most-used surface)
# Scaffold a new bundle
databricks bundle init default-python
# Check everything before deploying
databricks bundle validate [--target dev]
# Deploy to a target
databricks bundle deploy --target dev
# Run a job from the bundle
databricks bundle run <job_name> [--target dev]
# Tear down
databricks bundle destroy --target dev [--auto-approve]
# Summarize the bundle
databricks bundle summary --target prod
See the Asset Bundles guide for full coverage.
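A common wrapper pattern is deriving the target from the branch you are on. A minimal sketch; the branch-to-target mapping is an assumption, not something the CLI enforces:

```shell
# Pick a bundle target from the current branch (mapping is illustrative).
# In CI you would set branch from: git rev-parse --abbrev-ref HEAD
branch="${GIT_BRANCH:-main}"
case "$branch" in
  main) target=prod ;;
  *)    target=dev  ;;
esac
echo "deploying to $target"
# databricks bundle validate --target "$target"
# databricks bundle deploy --target "$target"
```

Validate before deploy in the same script so a bad `databricks.yml` fails fast.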
Workspace + file operations
# List workspace objects
databricks workspace list /Users/your.name
# Export a notebook to your local machine
databricks workspace export /Users/you/notebook.py --file ./notebook.py
# Import a notebook
databricks workspace import /Users/you/notebook.py --file ./notebook.py --language PYTHON
# Volume file ops (UC volumes)
databricks fs cp ./init.sh dbfs:/Volumes/prod/shared/init-scripts/install_deps.sh
databricks fs ls dbfs:/Volumes/prod/shared/
Clusters (all-purpose)
# List clusters
databricks clusters list
# Create a cluster from JSON spec
databricks clusters create --json @cluster.json
# Restart
databricks clusters restart <cluster-id>
# Terminate (stops but keeps the config)
databricks clusters delete <cluster-id>
# Permanently delete config
databricks clusters permanent-delete <cluster-id>
# Get event log
databricks clusters events <cluster-id>
For job clusters you rarely use these directly; bundle-deployed jobs bring their own cluster config.
SQL warehouses
# List
databricks warehouses list
# Inspect one
databricks warehouses get <warehouse-id>
# Edit config (JSON patch)
databricks warehouses edit <warehouse-id> --json '{
  "size": "MEDIUM",
  "min_num_clusters": 1,
  "max_num_clusters": 5,
  "auto_stop_mins": 5,
  "enable_photon": true,
  "enable_serverless_compute": true
}'
# Start / stop
databricks warehouses start <warehouse-id>
databricks warehouses stop <warehouse-id>
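For anything beyond a one-field change, building the JSON payload with jq beats hand-editing a quoted string: numbers stay numbers and quoting stays correct. A sketch; the values are illustrative:

```shell
# Build a warehouse edit payload programmatically (values are illustrative).
payload=$(jq -n \
  --arg size MEDIUM \
  --argjson max_clusters 5 \
  '{size: $size, min_num_clusters: 1, max_num_clusters: $max_clusters, auto_stop_mins: 5}')
echo "$payload"
# Then pass it to the edit command shown above with: --json "$payload"
```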
Jobs
Most Causeway jobs are bundle-defined; you will rarely create a job from the CLI. Useful lookups:
# List jobs visible to you
databricks jobs list
# Get a job's definition
databricks jobs get <job-id>
# Trigger a run
databricks jobs run-now <job-id>
# Recent runs for a job
databricks jobs list-runs --job-id <id>
# Get a specific run's result
databricks jobs get-run <run-id>
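In scripts you usually gate on the run's terminal state. A sketch of extracting it with jq; the field names follow the Jobs API `state` object, but the sample document itself is fabricated for illustration:

```shell
# Abridged sample of `databricks jobs get-run <run-id>` JSON output (fabricated).
cat > /tmp/run.json <<'EOF'
{"state": {"life_cycle_state": "TERMINATED", "result_state": "SUCCESS"}}
EOF
result=$(jq -r '.state.result_state' /tmp/run.json)
echo "$result"   # SUCCESS
```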
Pipelines (Lakeflow Declarative Pipelines)
# List
databricks pipelines list-pipelines
# Get one
databricks pipelines get <pipeline-id>
# Trigger an update (with optional full-refresh)
databricks pipelines start-update <pipeline-id> \
[--full-refresh] \
[--full-refresh-selection silver_customers,silver_orders]
# List recent updates
databricks pipelines list-updates <pipeline-id>
# Get one update's details
databricks pipelines get-update <pipeline-id> <update-id>
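To check whether the latest update succeeded, pull the first entry from `list-updates` (newest first). The sample document below is fabricated for illustration; only the `updates`/`state` shape is assumed:

```shell
# Abridged sample of pipelines list-updates JSON output (fabricated).
cat > /tmp/updates.json <<'EOF'
{"updates": [{"update_id": "u-2", "state": "COMPLETED"}, {"update_id": "u-1", "state": "FAILED"}]}
EOF
jq -r '.updates[0].state' /tmp/updates.json
```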
Unity Catalog
# List catalogs / schemas / tables
databricks catalogs list
databricks schemas list --catalog-name prod
databricks tables list --catalog-name prod --schema-name silver
# Describe a table
databricks tables get prod.silver.customers
# Grants on a securable
databricks grants get --securable-type TABLE --full-name prod.silver.customers
# Apply a grant via JSON
databricks grants update --securable-type TABLE --full-name prod.silver.customers \
--json '{"changes":[{"principal":"data-engineering","add":["SELECT"]}]}'
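Quoting the inline JSON is the usual failure mode here; generating the change-set with jq from variables avoids it. The principal is an illustrative group name:

```shell
# Build the grants change-set from shell variables (principal is illustrative).
principal="data-engineering"
changes=$(jq -n --arg p "$principal" '{changes: [{principal: $p, add: ["SELECT"]}]}')
echo "$changes"
# databricks grants update --securable-type TABLE --full-name prod.silver.customers --json "$changes"
```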
Secrets
# List scopes
databricks secrets list-scopes
# Create a scope
databricks secrets create-scope <scope-name>
# Put a secret
databricks secrets put-secret <scope-name> <key> --string-value "<value>"
# Read
databricks secrets get-secret <scope-name> <key>
Secrets in scopes are the correct home for API tokens, database passwords, and cloud credentials. Reference them from notebooks and jobs with dbutils.secrets.get(scope, key); reference them from cluster specs (Spark conf and environment variables, including bundle-defined ones) with the {{secrets/<scope>/<key>}} placeholder.
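For example, a cluster spec can inject a secret into an environment variable; the literal placeholder goes into the spec and Databricks substitutes the value at cluster launch. Scope and key names here are illustrative:

```shell
# Env var referencing a secret via the {{secrets/<scope>/<key>}} placeholder
# (scope/key names are illustrative).
spec=$(jq -n '{spark_env_vars: {DB_PASSWORD: "{{secrets/prod-scope/db-password}}"}}')
echo "$spec"
```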
Lakebase
# List instances
databricks lakebase list-instances
# Get one
databricks lakebase get-instance <instance-id>
# Create
databricks lakebase create-instance --json @instance.json
# Branch / restore (Autoscaling generation)
databricks lakebase create-branch --source prod-serving --name pr-42
databricks lakebase restore --at 2026-04-20T03:15:00Z
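Restore points are RFC 3339 UTC timestamps. Rather than hand-typing one, compute it; this assumes GNU date (standard on Linux CI images, not on stock macOS):

```shell
# RFC 3339 UTC timestamp 15 minutes in the past (assumes GNU date).
at=$(date -u -d '15 minutes ago' +%Y-%m-%dT%H:%M:%SZ)
echo "$at"
# Then pass it to the restore command above via --at "$at"
```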
Auth + config
# List configured profiles
databricks auth profiles
# Change active profile
databricks auth login --host https://... --profile <profile-name>
# Switch context per command
databricks --profile staging clusters list
# Show config file path
databricks auth describe
Profiles live in ~/.databrickscfg. One profile per workspace is the default; workload identity federation (OIDC) replaces the token flow for CI.
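Because ~/.databrickscfg is INI-style, you can list profile names without invoking the CLI at all. A sketch over illustrative file content:

```shell
# List profile names from an INI-style config (sample content is illustrative).
cat > /tmp/databrickscfg <<'EOF'
[DEFAULT]
host = https://dev.cloud.databricks.com

[staging]
host = https://staging.cloud.databricks.com
EOF
sed -n 's/^\[\(.*\)\]$/\1/p' /tmp/databrickscfg
```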
Convention: --json
Most mutating commands accept a JSON spec rather than a long flag list:
# Via a file
databricks clusters create --json @cluster.json
# Inline (prefer file for anything non-trivial)
databricks sql warehouses edit <id> --json '{"size": "MEDIUM"}'
Keep JSON specs in Git next to the bundle they relate to, not as one-liners in README snippets.
Output formats
# Default: table
databricks jobs list
# JSON (parseable)
databricks jobs list --output json
# Specific field
databricks jobs list --output json | jq -r '.jobs[].settings.name'
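The jq step works on any JSON the CLI emits. To try the filter without a workspace, run it over a sample; the document below is fabricated, only the `jobs`/`settings.name` shape is assumed:

```shell
# Abridged sample of jobs-list JSON output (fabricated); one name per job.
cat > /tmp/jobs.json <<'EOF'
{"jobs": [{"job_id": 101, "settings": {"name": "nightly_etl"}}, {"job_id": 102, "settings": {"name": "backfill"}}]}
EOF
jq -r '.jobs[].settings.name' /tmp/jobs.json
```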
Debugging
# Verbose logs
databricks --debug jobs list
# Show the HTTP request being made
databricks --log-level DEBUG jobs list
# Show the CLI's config
databricks auth describe
See also
- Asset Bundles guide — the command the CLI spends most of its time serving.
- Compute types — picking the right warehouse or cluster config.
- Common errors — when CLI commands return non-obvious errors.