Data quality and incident signals on dashboards: incident modal and per-chart curtains

Baseline Apache Superset has no incident model — when an upstream pipeline goes bad, dashboards still render the broken numbers and analysts have no in-product way to know. Our fork wires the dashboard into the data platform's data quality surface: opening a dashboard with an open incident pops a Data quality notice modal, the dashboard header carries an orange Data invalid badge with the affected-chart count, and every chart whose dataset is in scope wears a dismissible curtain.

Side-by-side

See the difference

This capability has no Apache Superset 4.1 equivalent — there's nothing to slide against. Open the fork screenshot for a pixel-level look.

Our fork (drafted)

Guided tour

How the incident modal greets the analyst on dashboard open

Hover or tap a pin to see how each block of the modal maps to a real analyst question. The modal is intentionally not a toast — it is a front-and-center explanation of which charts on the dashboard are unsafe to read until the upstream incident clears.

Drafted fork

Per-chart curtains

How affected charts warn the analyst on the dashboard itself

After the modal is dismissed, the dashboard stays the source of truth for which slice is risky. Affected charts wear a dismissible curtain with a View anyway out, while charts on healthy datasets render normally. The signal is per-chart, per-user, and reversible — no global blackout, no permanent block, no silent dismissal for the next analyst who opens the board.

Our fork (drafted)

Inside the dashboard

How the per-chart curtain is built

Each annotation explains how a single chart on the dashboard reacts to its dataset being in an incident. Healthy and unhealthy charts coexist in the same dashboard — the treatment is scoped to the dataset, not to the dashboard.

Drafted fork

Context

Why this matters

When the data platform reports an active incident against any dataset a dashboard depends on, opening that dashboard auto-pops a Data quality notice modal. The modal isn't a generic warning — it spells out the incident date and lists, by name, every chart on the dashboard that draws from an affected dataset, so the analyst sees the exact bad scope before they touch a number. The modal is dismissible (a single X in the corner), but the signal isn't: the dashboard header keeps an orange Data invalid: N charts badge so the analyst can re-open the modal at any time and never forget the dashboard is operating in degraded mode.
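The scope computation described above can be sketched in a few lines. This is an illustrative model, not our actual implementation: the `Incident` and `Chart` shapes, field names, and the `affected_charts` helper are all hypothetical stand-ins for the real objects exchanged with the data platform.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Incident:
    dataset_id: int   # dataset the incident was reported against
    opened_at: str    # incident date shown in the modal
    is_open: bool     # only open incidents drive the UI

@dataclass(frozen=True)
class Chart:
    name: str
    dataset_id: int

def affected_charts(charts, incidents):
    """Charts whose dataset has at least one open incident."""
    bad_datasets = {i.dataset_id for i in incidents if i.is_open}
    return [c for c in charts if c.dataset_id in bad_datasets]

# Two charts share a dataset that is under an open incident.
charts = [Chart("Revenue", 1), Chart("Signups", 2), Chart("Churn", 1)]
incidents = [Incident(dataset_id=1, opened_at="2024-05-01", is_open=True)]

hit = affected_charts(charts, incidents)
badge = f"Data invalid: {len(hit)} charts"  # header badge text
```

The modal lists `hit` by name; the header badge reuses the same count, so the two surfaces can never disagree about scope.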

After the modal is dismissed, the dashboard itself stays the source of truth for which slice is risky. Every affected chart renders with a curtain — a thin warning that says Data may be incorrect with a View anyway button on most tiles, and a longer-form variant on more sensitive ones (This chart may contain incorrect data because the dataset has an active incident). The curtain is per-chart and per-user: closing it on one chart reveals the data underneath without silently dismissing the warning for everyone else, and on the next dashboard open the full signal returns. Charts whose datasets are not in the incident scope render normally — there is no global blackout of the dashboard, no all-or-nothing block.
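The per-chart, per-user, reversible semantics of the curtain reduce to a small piece of session-scoped state. The sketch below is a simplified model under the assumption that dismissals live only in the viewer's session; the `CurtainState` name and its methods are hypothetical.

```python
class CurtainState:
    """Per-user, in-session dismissal state for chart curtains.

    Dismissals are never persisted, so a fresh dashboard open
    (a new CurtainState) restores every warning, and one analyst's
    "View anyway" never hides the curtain for anyone else.
    """

    def __init__(self, affected_chart_ids):
        self.affected = set(affected_chart_ids)
        self.dismissed = set()

    def view_anyway(self, chart_id):
        """Analyst clicked 'View anyway' on one chart."""
        if chart_id in self.affected:
            self.dismissed.add(chart_id)

    def show_curtain(self, chart_id):
        """Render the curtain only while affected and not yet dismissed."""
        return chart_id in self.affected and chart_id not in self.dismissed

# Dismissing chart 10 reveals its data; chart 11 stays curtained.
state = CurtainState({10, 11})
state.view_anyway(10)
```

Because healthy charts are simply never in `affected`, they render normally with no extra code path: the treatment is scoped per chart, not per dashboard.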

This entire surface is wired to the incident model in our internal data quality / observability platform — Superset listens for active incidents against the datasets a dashboard depends on, and the modal plus curtain treatment activates only while at least one underlying dataset has an open incident. When the incident clears upstream, the badge and curtains disappear on the next dashboard load. Stock Apache Superset has no incident model, no per-dataset DQ signal, and no chart-level overlay — analysts have no in-product way to know that today's numbers are based on a known-bad pipeline run.
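The activation rule is a single predicate: the whole surface (modal, badge, curtains) is on exactly while the dashboard's datasets intersect the set of datasets with open incidents. A minimal sketch, assuming the incident platform can be queried for that set (the function name is hypothetical):

```python
def incident_surface_active(dashboard_dataset_ids, open_incident_dataset_ids):
    """True while at least one dataset behind the dashboard
    has an open incident in the data quality platform."""
    return not set(dashboard_dataset_ids).isdisjoint(open_incident_dataset_ids)

# While dataset 2 has an open incident, the surface is on...
on = incident_surface_active({1, 2, 3}, {2, 9})
# ...and once the platform closes it, the next load renders clean.
off = incident_surface_active({1, 2, 3}, set())
```

No state needs to be torn down when the incident clears: the predicate is re-evaluated on each dashboard load, so the badge and curtains disappear on their own.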

Related features

Capabilities that usually ship alongside this one. Package tags tell you where each feature lives in our delivery plans.