Airflow 3.0 went GA on April 22, 2025. It is a larger break than any release in the project's history. Assets replaced Datasets. DAG versioning finally arrived. SLAs are gone. The Task SDK opens the door to non-Python tasks.
If you are still on 2.x and considering the upgrade, this is the short version of what changed and what that means for your DAGs.
Net-new capabilities
Task SDK
A stable, versioned interface between DAG code and Airflow internals. Tasks no longer call Airflow APIs directly; they call the SDK.
The immediate consequence: Airflow can run tasks in languages other than Python. Go is the first supported alternative; more are planned. The longer-term consequence: tasks can run in isolated workers with a thinner dependency on the Airflow installation, which matters for security-sensitive environments.
For Python DAG authors, the Task SDK is mostly invisible; the familiar `@task` and operator patterns still work. The change shows up when you write custom operators: they go through the SDK now instead of importing Airflow internals directly.
DAG versioning
Every DAG run is pinned to a specific version of the DAG code. The UI and API show the historical DAG structure that a past run actually executed.
This closes the "what code ran three weeks ago" gap that every mature team eventually hits. When a failed run needs forensics, you can see the DAG as it was at that moment, not as it is now.
Note
DAG versioning works well only with disciplined deployment. If you redeploy DAGs multiple times per day and change task structure mid-run, the version pinning still captures "what ran" but the versions proliferate. Treat DAG code the same way you treat application code: tagged releases, clear deploy cadence.
Assets (formerly Datasets)
Assets are the first-class data object. The `@asset` decorator and cleaner event-driven semantics (AIP-74/75) replace the 2.x Dataset API.
```python
from airflow.sdk import DAG, Asset  # 3.x authoring interface

orders_gold = Asset("s3://data-lake-prod/gold/orders/_delta_log/")

with DAG(
    dag_id="refresh_dashboard",
    schedule=[orders_gold],  # triggers when the asset updates
    catchup=False,
) as dag:
    ...
```
See dependency types for when to use Assets vs direct dependencies vs event-driven watchers.
AssetWatchers
Classes that monitor external event sources (SQS, Kafka, S3 events, Unity Catalog table updates) and emit Asset updates that trigger DAGs. Event-driven scheduling is no longer bolt-on.
The pattern that replaces polling sensors:
```python
from airflow.sdk import Asset, AssetWatcher
from airflow.providers.common.messaging.triggers.msg_queue import MessageQueueTrigger

# Watch an SQS queue that receives S3 event notifications for the landing zone.
# Queue URL is illustrative.
trigger = MessageQueueTrigger(
    queue="https://sqs.us-east-1.amazonaws.com/123456789012/orders-landing-events",
)

orders_arrived = Asset(
    "orders_arrived",
    watchers=[AssetWatcher(name="orders_arrived_watcher", trigger=trigger)],
)
```
What got removed
Plan your migration around these.
SLAs are gone
The SLA mechanism in 2.x was never great: one-shot emails, no retry, no routing. In 3.x it is removed outright.
Replacement patterns:
- `on_failure_callback` wired to PagerDuty, Slack, or OpsGenie.
- Astro Observe's data-quality framework (if you are on Astronomer).
- OpenLineage events for pipeline-level SLAs, consumed by a dedicated alerting system.
Danger
If you relied on `sla=` arguments and SLA-miss emails for production alerting, your upgrade to Airflow 3 will silently remove that alerting. Replace SLAs with callbacks before the upgrade, not after.
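A minimal sketch of the callback replacement: a failure callback is just a function that receives the task context. The webhook URL and message shape below are placeholders, not a prescribed integration.

```python
import json
import urllib.request

# Placeholder webhook URL; substitute your Slack/PagerDuty/OpsGenie endpoint.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"


def format_failure_message(dag_id: str, task_id: str, run_id: str) -> str:
    """Build the alert text from the failed task's coordinates."""
    return f"Task failed: {dag_id}.{task_id} (run {run_id})"


def alert_on_failure(context):
    """on_failure_callback: post the failed task's coordinates to a webhook."""
    ti = context["task_instance"]
    body = json.dumps(
        {"text": format_failure_message(ti.dag_id, ti.task_id, context["run_id"])}
    ).encode()
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)


# Wire it on every task in a DAG via default_args:
# default_args = {"on_failure_callback": alert_on_failure}
```

Attaching it through `default_args` gives you DAG-wide coverage in one place, which is the property the old SLA emails were providing.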
SubDAGs are gone
Use TaskGroups. TaskGroups were introduced in 2.x as the successor and are strictly better: no separate DAG run, no executor-slot overhead, no scheduler confusion.
```python
# 2.x SubDAG (gone in 3.x)
# SubDagOperator(task_id=..., subdag=...)

# 3.x TaskGroup (use this)
from airflow.utils.task_group import TaskGroup

# extract, validate, load are @task-decorated callables defined elsewhere
with TaskGroup(group_id="ingest") as ingest_group:
    extract() >> validate() >> load()
```
DAG and XCom pickling
Removed. If you were pickling custom objects through XCom, switch to serializable types (JSON-safe Python primitives) or use a proper backend: store the object in S3 / object storage and pass the key through XCom.
Warning
XCom is metadata storage, not a data bus. Even with pickling available, pushing non-trivial objects through XCom was always a mistake. The 3.x removal just makes the mistake louder.
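The store-externally-and-pass-the-key pattern can be sketched as below. The dict stands in for S3 so the example is self-contained; in production you would write through boto3 or an object-storage abstraction, and all names here are illustrative.

```python
import json

# Stand-in for S3 / object storage so the sketch runs anywhere.
_object_store: dict[str, bytes] = {}


def push_large_result(run_id: str, payload: dict) -> str:
    """Write the payload to external storage; return a JSON-safe key for XCom."""
    key = f"results/{run_id}/payload.json"
    _object_store[key] = json.dumps(payload).encode()
    return key  # only this small string travels through XCom


def pull_large_result(key: str) -> dict:
    """Downstream task resolves the key pulled from XCom back into the object."""
    return json.loads(_object_store[key].decode())
```

The upstream task returns the key (so it lands in XCom automatically), and the downstream task calls `pull_large_result` on whatever it pulls.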
execution_date and friends
`execution_date`, `tomorrow_ds`, `yesterday_ds`, `prev_ds`, `next_execution_date`, and most of their relatives are gone. Use `logical_date` and derive everything else explicitly with pendulum.
```python
# 2.x
def task_callable(**context):
    ds = context["execution_date"].strftime("%Y-%m-%d")
    tomorrow = context["tomorrow_ds"]
```

```python
# 3.x
def task_callable(**context):
    logical_date = context["logical_date"]  # a pendulum.DateTime
    ds = logical_date.format("YYYY-MM-DD")
    tomorrow = logical_date.add(days=1).format("YYYY-MM-DD")
```
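If you would rather not reach for pendulum, the same derivations fall out of stdlib `datetime`; the helper name below is mine, not an Airflow API:

```python
from datetime import datetime, timedelta, timezone


def derive_context_dates(logical_date: datetime) -> dict[str, str]:
    """Recreate the removed 2.x convenience values from logical_date alone."""
    fmt = "%Y-%m-%d"
    return {
        "ds": logical_date.strftime(fmt),
        "ds_nodash": logical_date.strftime("%Y%m%d"),
        "tomorrow_ds": (logical_date + timedelta(days=1)).strftime(fmt),
        "yesterday_ds": (logical_date - timedelta(days=1)).strftime(fmt),
    }


print(derive_context_dates(datetime(2025, 4, 22, tzinfo=timezone.utc)))
```

Keeping this in one helper also gives you a single place to test, instead of string formatting scattered across task callables.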
Operators cannot access the metadata DB directly
Custom operators that did `session.query(TaskInstance)` need to go through the Task SDK. If you have plugins that reached into Airflow internals, they need a rewrite.
xcom_pull by key requires task_ids
```python
# 2.x (worked, ambiguously)
value = ti.xcom_pull(key="foo")

# 3.x (required)
value = ti.xcom_pull(key="foo", task_ids="source_task")
```
Pulling an XCom by key without specifying which task produced it was always a bug waiting to happen. The 3.x enforcement turns it into an explicit failure at run time instead of a silently ambiguous result.
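To see why the key-only form was dangerous, here is a toy simulation in plain Python (not Airflow's implementation) of a pull against two upstream tasks that both pushed the same key:

```python
# Toy XCom table: two tasks each pushed a value under the key "row_count".
xcom_table = [
    {"task_id": "extract_orders", "key": "row_count", "value": 120},
    {"task_id": "extract_refunds", "key": "row_count", "value": 7},
]


def xcom_pull(key, task_ids=None):
    """Key-only lookups are ambiguous; requiring task_ids makes them exact."""
    matches = [
        row["value"]
        for row in xcom_table
        if row["key"] == key and (task_ids is None or row["task_id"] == task_ids)
    ]
    if task_ids is None and len(matches) > 1:
        raise ValueError(f"ambiguous key {key!r}: specify task_ids")
    return matches[0]
```

In 2.x the equivalent of the key-only call quietly returned whichever match sorted first; the 3.x requirement makes the ambiguity impossible to write.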
The migration playbook
Most DAGs will run with small adjustments. The painful cases are:
- Custom operators that reach into internals. Plan a rewrite through the Task SDK.
- DAGs that relied on SLA emails. Redesign alerting via callbacks.
- Code that used `execution_date` across task boundaries. Switch to `logical_date` plus pendulum derivations.
- Plugins that depended on removed hooks. Check plugin repositories for 3.x-compatible versions; rewrite if there is no upstream fix.
The recommended approach
- Run Astronomer's upgrade-check tooling (for OSS, `ruff` ships Airflow-specific lint rules in its `AIR3` family). These flag DAG code that will break under 3.x.
- Migrate on a branch, not in prod. Keep prod on 2.x until the branch's tests run clean.
- Run a full DAG parse + test suite on the 3.x branch.
- Test on staging for at least a week. Scheduler behavior is the thing that surprises you; DAG parse is the easy part.
- Promote to prod during a low-traffic window with rollback ready.
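A minimal OSS pre-flight for the first step might look like this, assuming a ruff version where the Airflow (`AIR301`-family) rules are available; they are in preview, so the `--preview` flag is required:

```shell
pip install ruff

# Flag Airflow 2.x constructs removed in 3.x across your DAG folder.
ruff check dags/ --select AIR3 --preview
```

Treat a clean run as necessary but not sufficient: lint rules catch removed APIs, not scheduler-behavior changes.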
Provider version alignment
Warning
Airflow core 3.x requires provider packages that support the Task SDK. The Amazon, Google, Databricks, and Microsoft providers all shipped 3.x-compatible versions in 2025, but some smaller provider packages lag. Pin provider versions explicitly and bump them with core in a coordinated release.
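A coordinated pin might look like the fragment below. The version numbers are illustrative placeholders, not recommendations; check each provider's changelog for its first 3.x-compatible release.

```
# requirements.txt -- pin core and providers together, bump them as one change
apache-airflow==3.0.2
apache-airflow-providers-amazon==9.2.0
apache-airflow-providers-google==14.0.0
apache-airflow-providers-databricks==7.0.0
```

The point is that core and providers move in a single reviewed diff, so an accidental provider bump can never land against an incompatible core.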
The practical impact
For a team shipping DAGs weekly, the changes that matter most in practice:
- DAG versioning is a free win: forensics get better with no DAG rewrite needed.
- Assets + AssetWatchers genuinely improve event-driven DAGs; worth adopting even if your existing DAGs do not need it.
- SLA removal is the item that silently breaks alerting if you miss it in migration; test callbacks fire as expected.
- Task SDK matters to custom-operator authors; most DAG authors will never notice it.
The rest is paper cuts that the upgrade-check tooling flags and surgical fixes resolve.
See also
- Dependency types — Assets, AssetWatchers, and direct dependencies.
- Event-driven DAGs guide — AssetWatcher recipes.
- Production readiness — the items on the checklist that changed for 3.x.