Causeway Platform · Status

Partial degradation in us-west-2.

Last updated 2026-04-21 00:14 UTC · auto-refreshes every 30s
Open incidents: 1 (Autobahn onramp, investigating)
Days since major outage: 47 (last major: 2026-03-04)
30-day uptime: 99.92% (aggregate across all Causeway services)
90-day incidents: 3 (one major, two minor)
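As a sanity check on the 30-day figure: uptime is the fraction of the window the services were available. The arithmetic below is a minimal sketch of that definition (the exact per-service aggregation is not specified on this page).

```python
def uptime_pct(window_minutes: float, downtime_minutes: float) -> float:
    """Return availability over a window as a percentage."""
    return 100.0 * (window_minutes - downtime_minutes) / window_minutes

# A 30-day window is 30 * 24 * 60 = 43,200 minutes, so 99.92% uptime
# corresponds to roughly 34.5 minutes of cumulative downtime.
print(round(uptime_pct(43_200, 34.56), 2))  # 99.92
```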
Our services

Causeway Platform

The applications we operate. Probed from inside.
Data platform partners

Upstream dependencies

Mirrored from each vendor's public status feed. Updated every minute.
Foundation

AWS infrastructure

Mirrored from AWS Health Dashboard. Updated every minute.
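The mirroring described above can be sketched as a once-a-minute poll loop. This is an illustrative sketch, not Causeway's actual implementation: the statuspage-style `status.indicator` payload shape is an assumption, since the vendor feed formats are not documented here.

```python
import json
import time
import urllib.request

# Map statuspage-style indicators to the labels used on this page.
# The indicator values are an assumption about the vendor feeds.
INDICATOR_TO_LABEL = {
    "none": "Operational",
    "minor": "Degraded",
    "major": "Incident",
    "critical": "Incident",
    "maintenance": "Maintenance",
}

def label_for(payload: dict) -> str:
    """Translate a vendor status payload into one of our legend labels."""
    indicator = payload.get("status", {}).get("indicator", "none")
    return INDICATOR_TO_LABEL.get(indicator, "Degraded")

def mirror(feed_url: str, interval_s: int = 60) -> None:
    """Poll a vendor feed once a minute and print the mapped label."""
    while True:
        with urllib.request.urlopen(feed_url) as resp:
            payload = json.load(resp)
        print(label_for(payload))
        time.sleep(interval_s)
```

Unknown indicators fall back to "Degraded" rather than "Operational", so an unexpected vendor payload reads as a warning, not a clean bill of health.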
Legend: Operational · Degraded · Incident · Maintenance
Recent events

Incidents, last 30 days

Newest first. Resolved items collapse by default.
Investigating: elevated latency on Autobahn onramp in us-west-2
Degraded · Started 2026-04-20 17:42 UTC · Autobahn · EKS us-west-2
2026-04-20 18:14 UTC · Monitoring
Pod restart has reduced P95 latency from 8.2s to 1.4s. Monitoring for regression before declaring resolved. No data loss reported.
2026-04-20 17:58 UTC · Identified
Root cause: stuck worker pool in the onramp service after a dependency upgrade. Rolling the deployment back in us-west-2 while us-east-1 and eu-west-1 absorb the traffic.
2026-04-20 17:42 UTC · Investigating
Alerting picked up elevated P95 latency on the Autobahn onramp submission endpoint, us-west-2 only. Active applicants in the region may see the form take longer than usual or return a 504. Retries are safe; no submissions have been lost.
Resolved: Astronomer us-east-1 schedule delays affected Catalog refresh
Resolved · 2026-04-14 09:12 – 11:38 UTC · 2h 26m · Astronomer · Catalog
2026-04-14 11:38 UTC · Resolved
Astronomer confirmed recovery at 11:12 UTC. Catalog refresh backlog has drained; metadata is current as of the most recent scheduled interval. Back on the paved path.
2026-04-14 10:05 UTC · Monitoring
Astronomer scheduler has resumed in us-east-1. Causeway Catalog refresh queue is draining at expected throughput.
2026-04-14 09:30 UTC · Identified
Upstream issue in Astronomer us-east-1. Catalog metadata may lag by up to 45 minutes until the scheduler recovers. No data loss; refresh jobs resume automatically on upstream recovery.
2026-04-14 09:12 UTC · Investigating
Catalog refresh jobs are not completing on schedule. Investigating upstream dependency health.
Resolved: Contract lint timeout on large schemas
Resolved · 2026-03-29 14:08 – 14:44 UTC · 36m · Contracts
2026-03-29 14:44 UTC · Resolved
Timeout increased from 30s to 90s and lint worker scaling fixed. Five affected schemas re-processed successfully. Issue root-caused to a cold-start path in the YAML parser; permanent fix tracked in RFD 0012 follow-up.
2026-03-29 14:20 UTC · Identified
Contracts with over 200 columns time out at the default 30s lint budget. Scaling the lint worker pool and raising the budget.
2026-03-29 14:08 UTC · Investigating
Several contract creators report causeway lint hanging or returning a timeout error on large contract files.

Get notified when status changes.

We send a short note when an incident opens, when the state changes, and when it resolves. No marketing.

RSS + API
API: GET /api/status.json
RSS: /status.xml
History: Full uptime archive
Source of truth: our internal probes plus Databricks, Astronomer, AWS.
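The status endpoint above can be consumed programmatically. A minimal sketch follows; the host is a placeholder and the response schema (a `status` string plus an `open_incidents` count) is an assumption, since only the endpoint path is documented here.

```python
import json
import urllib.request

# Placeholder host; only the path /api/status.json is documented above.
BASE = "https://status.example.com"

def summarize(payload: dict) -> str:
    """Render a one-line summary from an assumed status.json shape."""
    status = payload.get("status", "unknown")
    open_incidents = payload.get("open_incidents", 0)
    return f"{status} ({open_incidents} open incident(s))"

def fetch_status() -> str:
    """Fetch the live status document and summarize it."""
    with urllib.request.urlopen(f"{BASE}/api/status.json") as resp:
        return summarize(json.load(resp))
```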