Course Content
Dashboards, Reports, and Operator UX
What operators need to see at a glance, what they need to act on, and what they will quietly stop using
Why display surfaces fail
Most customer dashboards are dead within 90 days of being built. They get demoed, they get bookmarked, and then they go unopened. By month three, the only person looking at the dashboard is the person who built it, and even they only check it before quarterly reviews.
This is the default outcome for display surfaces, and it has a specific set of causes:
- They were designed to show data, not to drive a decision
- They answer questions nobody asks, or questions the user already had a better answer for
- They take too long to load or are too far from the workflow
- They lose trust because operators can’t tell when data is stale
- They are not maintained as the underlying ontology changes
Avoiding this outcome is what this lesson is about. The goal is dashboards that survive — opened by someone every day, because they make a decision better.
Three audiences, three surfaces
Display surfaces in an FDE deployment serve three distinct audiences. Trying to serve all three with one surface is the most common reason dashboards fail.
| Audience | What they need | Surface |
|---|---|---|
| Executive | A few numbers, in context, at a glance | Executive dashboard |
| Analyst | Flexible exploration, exports, deep dives | Analyst workbench |
| Operator | Real-time situational awareness while working | Operational dashboard |
Each has a different cadence (executive: weekly; analyst: ad-hoc; operator: live), a different density (executive: sparse; analyst: dense; operator: dense), and a different relationship to the underlying ontology (executive: read aggregates; analyst: read everything; operator: read + drive actions).
Design for one. Don’t pretend one surface serves all three.
Designing for the executive
The VP of Operations at Northbound — your engagement’s sponsor — wants to walk into a Monday morning meeting and know:
- On-time delivery: where is it this week vs. last week vs. last quarter?
- Where is the deviation coming from? (One hub? One customer? One driver pool?)
- What’s the trend? (Will the quarter’s target be hit?)
- What is being done about it?
A great executive dashboard for her shows exactly that, in five tiles, with the cause and the action one click away.
┌──────────────────────────────────────────────────────────────────────────┐
│ ON-TIME DELIVERY Week ending 2026-05-12 │
├──────────────────────────────────────────────────────────────────────────┤
│ │
│ ╔═══════╗ ON-TIME Last week 84.1% │
│ ║ 86.2% ║ +2.1% Quarter target 92.0% │
│ ╚═══════╝ Pace: 89.4% by quarter end │
│ │
│ By hub: │
│ Chicago 91.4% ▲ +0.8 │
│ East (Doug) 78.1% ▲ +5.6 ← driving the lift │
│ Detroit 88.2% ▽ -1.1 │
│ St. Louis 85.0% ▲ +1.2 │
│ │
│ Top contributors to misses this week: │
│ Driver shortages (East) 6 missed loads │
│ Weather delays (Detroit lane) 4 missed loads │
│ Customer pickup delays (Acme) 3 missed loads │
│ │
│ In progress: dispatch reassignment workflow now live for Maria. │
│ Iteration 4 candidates: slip alerting; Doug's hub view. │
│ │
│ [Open detailed view] [Last refreshed: 06:00 today] │
└──────────────────────────────────────────────────────────────────────────┘
What this gets right:
- One screen. No tabs, no menus, no “click here for more.”
- The headline number is large and labeled with context (this week vs. last week vs. target).
- Where the deviation comes from is on the same screen, not behind a drill-down.
- What is being done about it is on the same screen — a literal status line.
- The freshness timestamp is visible so the VP knows when this was last computed.
What it deliberately does not try to be:
- Interactive exploration (that’s the analyst workbench)
- Real-time (executive cadences are weekly)
- A replacement for an operational tool (the VP does not dispatch loads)
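One number in the mock deserves a definition before moving on: “Pace: 89.4% by quarter end” is a projection, and projections need an agreed formula. A minimal sketch of one way to compute a pace figure — damped linear extrapolation of the weekly trend. The formula is an assumption for illustration, not something this lesson prescribes, and whichever definition you pick should be footnoted on the surface:

```typescript
// Sketch: project the quarter-end on-time rate from recent weekly
// rates. Damped extrapolation is one of many reasonable definitions.
interface WeeklyRate {
  weekEnding: string; // ISO date, e.g. "2026-05-12"
  onTimePct: number;  // 0-100
}

function quarterEndPace(
  history: WeeklyRate[],   // oldest first
  weeksRemaining: number,
  damping = 0.5,           // illustrative: the trend decays each week
): number {
  if (history.length < 2) throw new Error("need at least two weeks of history");
  const last = history[history.length - 1].onTimePct;
  const avgChange = (last - history[0].onTimePct) / (history.length - 1);
  let pace = last;
  let change = avgChange;
  for (let w = 0; w < weeksRemaining; w++) {
    change *= damping;     // geometric decay, so a two-week hot streak
    pace += change;        // doesn't project an impossible rate
  }
  return Math.min(100, Math.max(0, pace));
}

// 84.1% -> 86.2% with six weeks left lands between the current rate
// and a straight-line extrapolation.
console.log(
  quarterEndPace(
    [
      { weekEnding: "2026-05-05", onTimePct: 84.1 },
      { weekEnding: "2026-05-12", onTimePct: 86.2 },
    ],
    6,
  ).toFixed(1) + "%",
);
```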
Rules for executive surfaces
- One screen per audience. If you can’t fit it on one screen, you have not converged on the right summary yet.
- Numbers in context. Every metric is paired with last-period, target, and trajectory.
- Cause adjacent to effect. “On-time is 86%” is half a sentence; “and the lift is driven by East hub recovering” finishes it.
- Action on the same surface. What is being done about the gap — surface it.
- Refresh cadence matches use cadence. Weekly meeting → weekly snapshot, with an explicit timestamp. Don’t let the executive believe they’re looking at live data when they aren’t.
Designing for the analyst
The analyst at Northbound — let’s call him Raj, the senior ops analyst — needs:
- To answer ad-hoc questions (“which customers cause the most slippage?”)
- To export data for further analysis (Excel, R, Python)
- To save and share views
- To compose queries across the ontology
- To handle dimensions and time ranges flexibly
His surface is fundamentally different from the executive’s. It is dense, interactive, exploratory.
Designing for Raj:
- Start from the ontology. His workbench is a typed query interface against the semantic layer, not a hand-built dashboard. He picks an object type, filters, groups, aggregates.
- Saved views are the unit of work. Raj’s “weekly customer slippage analysis” is a saved query he opens each week. Treat it as first-class.
- Exports are not an afterthought. CSV, XLSX, Parquet — analysts need to take data out. Build the exports cleanly; do not require workarounds.
- Show units, types, and sources. Every field in a workbench result is labeled with its unit and its underlying source. Analysts make mistakes when they don’t know what they’re looking at.
- Document the data dictionary. Raj should be able to click any column and see “this is Load.live_eta_delta_min, refreshed every 5 minutes from the GPS feed, can be NULL when GPS is stale.” If he can’t, he doesn’t trust the data.
The analyst’s surface is what makes the customer self-sufficient. Done well, Raj can answer 80% of executive questions himself by month two, freeing you to keep building.
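To make “typed query interface against the semantic layer” concrete, here is a sketch of what Raj’s saved view might look like in code. The `ontology` client and every field name below are hypothetical stand-ins, not a real API; the point is the shape — object type, filter, group, aggregate:

```typescript
// Sketch of Raj's "weekly customer slippage analysis" as a typed query.
// Everything here is illustrative, including the field names.
interface SlippageRow {
  customer: string;
  loads: number;
  missed: number;
  avgSlipMin: number | null; // minutes; null when GPS was stale
}

// Stand-in for a typed semantic-layer client (hypothetical).
declare const ontology: {
  objects(type: "Load"): {
    filter(predicate: Record<string, unknown>): {
      groupBy(field: string): {
        aggregate(spec: Record<string, unknown>): {
          execute(): Promise<SlippageRow[]>;
        };
      };
    };
  };
};

// A saved view is this query plus a name, an owner, and a cadence.
async function weeklyCustomerSlippage(weekEnding: string): Promise<SlippageRow[]> {
  return ontology
    .objects("Load")                             // object type
    .filter({ deliveredWeek: weekEnding })       // typed filter
    .groupBy("customer.name")                    // group across the ontology
    .aggregate({
      loads: { count: true },
      missed: { countWhere: { onTime: false } },
      avgSlipMin: { avg: "live_eta_delta_min" }, // unit: minutes
    })
    .execute();
}
```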
Designing for the operator
The operator surfaces — Maria’s morning view, Doug’s hub view, the slip-alert panel — were covered in Building Operational Applications. Brief reprise of the principles, recast as display rules:
- Density tuned for power users in 8-hour sessions
- Freshness visible on every value
- Source affordance for every value
- Workflow-shaped, not type-shaped
- Errors and stale states designed explicitly
An operator dashboard is just an operational app with fewer write actions. Same rules apply.
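One way to enforce “freshness visible on every value” and “errors and stale states designed explicitly” is to make them part of the value’s type, so a bare number can never reach the screen. A minimal sketch, with illustrative names:

```typescript
// Sketch: source and freshness travel with the value. The third
// variant gives staleness a named state instead of a silent NULL.
type DisplayValue<T> =
  | { state: "fresh"; value: T; source: string; asOf: Date }
  | { state: "stale"; value: T; source: string; asOf: Date }
  | { state: "unavailable"; source: string; reason: string };

function classify<T>(
  value: T | null,
  source: string,
  asOf: Date | null,
  maxAgeMs: number,
): DisplayValue<T> {
  if (value === null || asOf === null) {
    return { state: "unavailable", source, reason: "no reading available" };
  }
  const ageMs = Date.now() - asOf.getTime();
  return { state: ageMs <= maxAgeMs ? "fresh" : "stale", value, source, asOf };
}

// The renderer is forced by the type to handle every state explicitly.
function render(v: DisplayValue<number>): string {
  switch (v.state) {
    case "fresh":
      return `${v.value} (${v.source}, as of ${v.asOf.toLocaleTimeString()})`;
    case "stale":
      return `${v.value} — STALE since ${v.asOf.toLocaleTimeString()} (${v.source})`;
    case "unavailable":
      return `— (${v.source}: ${v.reason})`;
  }
}
```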
The trust budget
Every display surface starts with a trust budget — the audience’s willingness to act on what it shows — of roughly 50 points out of 100. Every visible mistake costs points. Run out, and the audience stops opening the surface.
Things that drain trust:
- Showing stale data without saying so
- Numbers that don’t match what another system says, with no explanation
- “Total” rows that don’t add up to the sum of the sub-rows
- Numbers that change between page loads with no time having passed
- Charts whose axes silently changed scale
- Tooltips that disagree with the cell value
Things that build trust:
- Source and freshness on every value, visibly
- A “where this number came from” link on every aggregate
- Footnotes explaining definitions (what counts as “on-time”?)
- Versioned definitions — when “on-time” changes, the dashboard says so (see the sketch after this list)
- A visible last-refreshed timestamp at the top of every surface
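A minimal sketch of what versioned definitions could look like — the structure and the example definitions are assumptions, but the principle is that the footnote shown on the surface changes whenever the computation does:

```typescript
// Sketch: versioned metric definitions, so the surface can say not
// just "on-time: 86.2%" but which definition of "on-time" produced it.
// The example definitions below are invented for illustration.
interface MetricDefinition {
  metric: string;
  version: number;
  effectiveFrom: string; // ISO date
  definition: string;    // the footnote text shown on the surface
}

const definitions: MetricDefinition[] = [
  {
    metric: "on_time_delivery",
    version: 1,
    effectiveFrom: "2026-01-01",
    definition: "Delivered within 30 min of the scheduled window end.",
  },
  {
    metric: "on_time_delivery",
    version: 2,
    effectiveFrom: "2026-04-01",
    definition:
      "Delivered within 15 min of the scheduled window end; customer-caused delays excluded.",
  },
];

// Pick the definition that was in force for the period a tile covers,
// so week-vs-quarter comparisons can flag a definition change.
function definitionFor(metric: string, date: string): MetricDefinition {
  const candidates = definitions
    .filter((d) => d.metric === metric && d.effectiveFrom <= date)
    .sort((a, b) => b.effectiveFrom.localeCompare(a.effectiveFrom));
  if (candidates.length === 0) throw new Error(`no definition for ${metric}`);
  return candidates[0];
}
```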
Trust, once lost, is hard to rebuild. Many engagements have a story of “we shipped a dashboard, it had a bug, the VP saw the wrong number, she’s never opened it since.” Don’t be that engagement.
The four traps that kill dashboards
Specific failure modes worth naming.
Trap 1 — The art project
Symptom: stunning visualization. Custom D3. Chord diagram of customer flows. Three different chart libraries on one page.
Why it fails: the audience cannot quickly read it. They open it, can’t decode it in 5 seconds, and don’t come back.
Fix: bar charts, line charts, tables, sparklines. Boring works. Reach for fancier visualizations only when boring genuinely fails to communicate (rare).
Trap 2 — The everything dashboard
Symptom: 24 tiles, 6 tabs, every metric anyone ever asked about, all on one URL.
Why it fails: nothing is the headline. The user opens it, scrolls, doesn’t know where to focus, leaves.
Fix: one screen, five tiles, the rest a click away. If a tile is on the front page, it has a story. If it doesn’t, demote it.
Trap 3 — The vanity dashboard
Symptom: “engagement metrics” — number of users, number of clicks, number of actions. The dashboard exists to justify the engagement, not to drive a decision.
Why it fails: nobody actually decides anything from it. It dies once the renewal is signed.
Fix: every dashboard you ship must be tied to a decision a specific role makes. If you can’t name the decision, don’t ship the dashboard.
Trap 4 — The unowned dashboard
Symptom: you shipped a dashboard in iteration 3; the ontology changed in iteration 5; nobody updated the dashboard; it now shows wrong numbers; nobody noticed for 4 weeks.
Why it fails: dashboards rot. Ontologies evolve. Without an owner, the dashboard drifts out of step with reality.
Fix: every dashboard has a named owner (a customer person, ideally). Every dashboard has a definition of “this still works correctly” — a quick check that an FDE or owner runs monthly. Every ontology change includes a pass over the dashboards that reference the changed fields.
Reports that get read
Reports are dashboards that get sent, not pulled. Most reports go unread for the same reasons dashboards go unopened — too long, too generic, too late, too noisy.
A report that gets read:
- Arrives at a time tied to a decision (Sunday night → Monday meeting, Friday afternoon → weekly retro)
- Has a headline in the email subject and the first sentence (“On-time delivery: 86.2%, up 2.1pts week-over-week; pace to 89.4% by quarter end”)
- Contains 3-5 numbers, in context, with the trend
- Has one or two “needs attention” callouts — specific, actionable
- Links back to the live surface for those who want detail
Email is the carrier most of the time. Slack works for shorter operational reports. Don’t over-engineer; the writing matters more than the delivery channel.
A weekly Northbound executive report:
Subject: Northbound — week ending May 12
On-time delivery: 86.2% (▲ +2.1 from last week, quarter target 92.0%).
On pace for 89.4% by quarter end.
Highlights:
• East hub (Doug): 78.1%, ▲ +5.6 — reassignment workflow is helping.
• Detroit lane: 88.2%, ▽ -1.1 — weather; should recover next week.
Needs attention:
• Acme Corp: 3 customer-caused pickup delays this week. Recommend
flagging to their account manager Thursday.
What we shipped this week:
• Reassignment workflow live for Maria; Doug's hub view in test.
Full data: northbound.platform/exec
Less than 100 words. Reads in 30 seconds. The decisions are on the surface. Live data is one click away.
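A sketch of how the headline discipline might be enforced in code, so the subject line and the first sentence can never drift apart from the numbers. Field names are illustrative:

```typescript
// Sketch: compute the headline once and reuse it as the report's
// first sentence. All field names are illustrative.
interface WeeklySnapshot {
  weekEnding: string; // "2026-05-12"
  onTimePct: number;  // 86.2
  deltaPts: number;   // +2.1 week-over-week
  targetPct: number;  // 92.0
  pacePct: number;    // 89.4
}

function headline(s: WeeklySnapshot): string {
  const arrow = s.deltaPts >= 0 ? "▲" : "▽";
  const sign = s.deltaPts >= 0 ? "+" : "";
  return (
    `On-time delivery: ${s.onTimePct.toFixed(1)}% ` +
    `(${arrow} ${sign}${s.deltaPts.toFixed(1)} from last week, ` +
    `quarter target ${s.targetPct.toFixed(1)}%). ` +
    `On pace for ${s.pacePct.toFixed(1)}% by quarter end.`
  );
}

function buildReportEmail(s: WeeklySnapshot, body: string) {
  return {
    subject: `Northbound — week ending ${s.weekEnding}`,
    // Headline first; detail and the link to the live surface after.
    text: `${headline(s)}\n\n${body}\n\nFull data: northbound.platform/exec`,
  };
}
```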
Embedding into where people work
The right place for a display surface is often not “a new URL the customer learns to bookmark.” It is where the audience already spends their time.
- The executive may not visit your dashboard, but they read email and Slack. Send the report there.
- The analyst lives in Excel and a notebook. Make exports first-class so their workbench is your workbench.
- The operator lives in your operational app. The dashboard is a tile inside that app, not a separate site.
Embedding has a cost — engineering effort, surface duplication — but the cost of “people don’t open the standalone dashboard” is higher.
Latency budgets, restated
A reminder from the operational apps lesson: every surface has a latency budget. Display surfaces are no exception.
- Operational dashboard: p50 under 500ms, p99 under 2s
- Executive dashboard: p50 under 1s, p99 under 3s — slower is acceptable because the cadence is weekly
- Analyst workbench query: p50 under 3s, p99 under 15s — queries are pulled, so analysts tolerate more latency
- Report generation: under 30s — runs in the background; nobody is waiting
Surfaces that miss their budget get used less. Instrument latency from day one.
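Instrumenting from day one can be as simple as timing every query a surface issues and checking the recorded samples against its budget. A minimal sketch — the budget numbers come from the list above; everything else is illustrative:

```typescript
// Sketch: per-surface latency budgets checked from recorded samples.
interface LatencyBudget { p50Ms: number; p99Ms: number }

const budgets: Record<string, LatencyBudget> = {
  operational: { p50Ms: 500,   p99Ms: 2_000 },
  executive:   { p50Ms: 1_000, p99Ms: 3_000 },
  workbench:   { p50Ms: 3_000, p99Ms: 15_000 },
};

const samples: Record<string, number[]> = {};

// Wrap any surface query to record its duration.
async function timed<T>(surface: string, fn: () => Promise<T>): Promise<T> {
  const start = performance.now();
  try {
    return await fn();
  } finally {
    (samples[surface] ??= []).push(performance.now() - start);
  }
}

// Nearest-rank percentile — approximate, but fine for a budget check.
function percentile(xs: number[], p: number): number {
  const sorted = [...xs].sort((a, b) => a - b);
  return sorted[Math.min(sorted.length - 1, Math.floor(p * sorted.length))];
}

function withinBudget(surface: string): boolean {
  const budget = budgets[surface];
  const xs = samples[surface] ?? [];
  if (!budget || xs.length === 0) return false;
  return (
    percentile(xs, 0.5) <= budget.p50Ms && percentile(xs, 0.99) <= budget.p99Ms
  );
}
```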
The pre-launch checklist for any display surface
Before you ship a dashboard, report, or workbench view, run this:
- The audience is specific — one role, one cadence, one decision
- The decision the surface drives is articulable in one sentence
- Headline metric is on screen without scrolling
- Every value shows its source and freshness
- Definitions of derived metrics are footnoted or one-click away
- Surface degrades gracefully when a data source is stale (named state, not silent NULL)
- Latency is within budget under realistic load
- A named owner is assigned, and they have agreed
- Export and embed paths exist where the audience needs them
- A “this surface still works” check is documented for the owner
Ten items. They keep you out of the dashboard graveyard.
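The “this surface still works” check from the last item can be a short script the owner runs monthly. A sketch of what it might assert, mirroring the trust-drains named earlier — all names are illustrative:

```typescript
// Sketch: a monthly "this dashboard still works" check. Each assertion
// corresponds to a trust-drain from earlier in the lesson.
interface HubRow { hub: string; onTimePct: number; loads: number }

function checkDashboard(
  rows: HubRow[],
  totalLoads: number,
  lastRefreshed: Date,
): string[] {
  const failures: string[] = [];

  // "Total" rows must add up to the sum of the sub-rows.
  const summed = rows.reduce((n, r) => n + r.loads, 0);
  if (summed !== totalLoads) {
    failures.push(`total ${totalLoads} != sum of hub rows ${summed}`);
  }

  // Freshness must be within the surface's cadence (weekly here).
  const ageDays = (Date.now() - lastRefreshed.getTime()) / 86_400_000;
  if (ageDays > 7) failures.push(`last refresh was ${ageDays.toFixed(1)} days ago`);

  // Percentages must be in range — a common symptom of an ontology
  // change silently breaking the computation upstream.
  for (const r of rows) {
    if (r.onTimePct < 0 || r.onTimePct > 100) {
      failures.push(`${r.hub}: on-time ${r.onTimePct}% out of range`);
    }
  }
  return failures; // empty list = the surface still works
}
```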
Closing Phase 4
By the end of Phase 4 the Northbound system has:
- An operational app for Maria (the morning view), Doug (the hub view), and the dispatch team
- A reassignment workflow with a typed action and an audit trail
- An agent that produces a morning brief at 5:45 AM
- An executive dashboard the VP opens before her Monday meeting
- An analyst workbench Raj uses to answer ad-hoc questions
- A weekly report that lands in the VP’s inbox Sunday at 9 PM
Five surfaces. Each scoped to one audience. Each opened by the right person at the right cadence. Each backed by the same semantic layer, so a change to the ontology in one place updates them all.
This is the operational asset Phase 5 will deploy to production. Everything we’ve built has been on a developer environment Maria reaches from her laptop. The next phase is the cutover.
Key terms to remember
- Three audiences — executive, analyst, operator — each with a different surface
- Trust budget — the audience’s willingness to act on what the surface shows
- The four traps — art project, everything dashboard, vanity dashboard, unowned dashboard
- Embedding — meeting the audience where they already work, not at a new URL
- Owner — every dashboard has one; without one, it rots
- Latency budget — explicit per-surface, instrumented from day one
What’s next
Phase 4 was about building. Phase 5 is about shipping. The next lesson — Deploying to Production at the Customer — walks through the cutover plan that takes Maria from “using the new view on her laptop” to “the platform is the system of record.” This is where most projects find out whether the work to date was real.