Power BI has three ways to reach Databricks. They look interchangeable in the UI; they are not interchangeable in performance or authoring semantics.

| Connector | Driver | 2026 status |
| --- | --- | --- |
| Native "Azure Databricks" / "Databricks" Power Query connector | Chosen automatically | Default; the right answer almost always |
| ADBC (Arrow Database Connectivity) | New columnar driver | Default under the hood for new connections |
| ODBC (Simba) | Legacy row-oriented | Still supported; do not author new work against it |

The native connector

In Power BI Desktop, Get data → Databricks uses Microsoft's native connector. It handles OAuth, Unity Catalog navigation, partner-connect wiring, and selects the driver for you.

Use it unless you have a specific reason not to.

ADBC

ADBC is the Arrow-native connection layer adopted across the Apache ecosystem. Power BI shipped ADBC support in the Databricks connector in early 2026 and made it the default for new connections. The gain that matters is columnar transfer end to end: Arrow result sets stream into Power BI without the per-row deserialization the ODBC path pays for, which shows up as faster refreshes on wide tables and large extracts.

Force ADBC explicitly when authoring M code:

```
let
    Source = Databricks.Catalogs(
        "adb-1234567.azuredatabricks.net",
        "/sql/1.0/warehouses/abc123xyz",
        [Implementation = "2.0"]
    )
in
    Source
```

`Implementation="2.0"` forces ADBC. The same toggle lives in File → Options → Preview features → Databricks ADBC.

Note

ADBC vs. ODBC is a driver-layer concern that a user interacts with at most once per connection. You do not need to teach it to report authors; you do need to ensure the organization's template connections are ADBC before rolling them out.

ODBC (legacy)

Existing reports on Simba ODBC keep working; there is no urgency to migrate them. New authoring should use ADBC.

If a published report slows down noticeably after a Power BI Desktop or service upgrade, check the driver first. Occasionally a tenant-wide setting flips a connection back to ODBC; a connection that was fast yesterday is slow today because it is quietly taking the row-deserialization path again.

The Databricks.Query trap

Custom SQL via `Databricks.Query("select * from …")` is the first thing seasoned SQL developers reach for. It works in Import mode.

Danger

`Databricks.Query` is not supported in DirectQuery. If you model against `Databricks.Query` and later flip a table to DirectQuery, the report does not error at authoring time, but queries fail mysteriously in the service. Always model against `Databricks.Catalogs(...)` and let Power BI generate the SQL. Custom SQL belongs in a Databricks view or dbt model that the connector reads like any other table.
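
A minimal sketch of the safe pattern. The catalog, schema, and table names (`main`, `analytics`, `orders`) are hypothetical, and the exact navigation steps the connector generates can differ, so in practice let the navigator write them for you:

```
let
    // Connect through the catalog navigator rather than Databricks.Query,
    // so the same M works in both Import and DirectQuery.
    Source = Databricks.Catalogs(
        "adb-1234567.azuredatabricks.net",
        "/sql/1.0/warehouses/abc123xyz"
    ),
    // Hypothetical names, for illustration only.
    main      = Source{[Name = "main", Kind = "Database"]}[Data],
    analytics = main{[Name = "analytics", Kind = "Schema"]}[Data],
    orders    = analytics{[Name = "orders", Kind = "Table"]}[Data]
in
    orders
```

Because Power BI sees plain navigation steps, it can fold filters and aggregations into the SQL it sends to the warehouse, in either storage mode.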

Authentication

Three identity flows for Databricks connections:

| Flow | Use |
| --- | --- |
| OAuth (Entra ID) | Interactive user sessions, Power BI Desktop authoring |
| Service principal (M2M OAuth) | Power BI service, gateways, scheduled refresh, CI |
| Personal Access Token (PAT) | Banned for new work |

Service principals get scoped Unity Catalog grants (`USE CATALOG`, `USE SCHEMA`, `SELECT`). Rotate their credentials on the platform's cadence (automatic with Entra ID app registrations).

Warning

Older Power BI + Databricks tutorials still show PAT examples. The 2026 Causeway policy is service principal + M2M OAuth for every service-to-service connection. PATs belong to individuals who leave; service principals do not.

Gateway considerations

Power BI Service is in Azure; Databricks can be anywhere. When Databricks is reachable over the public internet, Power BI connects directly. When it is not, you need a gateway.

Two gateway kinds:

- On-premises data gateway: installed on a VM you manage with network line of sight to the workspace; works wherever Databricks runs.
- VNet data gateway: Microsoft-managed, injected into an Azure virtual network; nothing to patch, but Azure-only.

For the on-premises gateway specifically: keep it network-adjacent to the workspace, keep its Databricks connector bits current so refreshes are not quietly pinned to an old driver path, and run at least two clustered members for availability.

AWS-specific

AWS shops hit a few extras:

- Authentication is Databricks-managed OAuth rather than Entra ID, so the service principal is a Databricks service principal with its own OAuth secret.
- The VNet data gateway is Azure-only; a PrivateLink-fronted workspace on AWS needs the on-premises data gateway.
- Import refreshes move data from AWS into Azure, so cross-cloud egress charges apply; DirectQuery or incremental refresh keeps them bounded.

Connection quality checklist

Every production semantic model's Databricks connection should satisfy:

- Native Databricks connector, not hand-rolled ODBC.
- ADBC driver (the default for new connections; `Implementation="2.0"` when pinned in M).
- No `Databricks.Query`; model against `Databricks.Catalogs(...)` and push custom SQL into a view or dbt model.
- Service principal with M2M OAuth and scoped Unity Catalog grants; no PATs.
- If a gateway is in the path, it is current and clustered.

See also