📖 Lesson ⏱️ 75 minutes

MVP Scoping Under Ambiguity

Picking the first slice that proves value, fits a sprint, and earns you the right to keep building

The scoping moment

You finish week 2 with a problem statement, a stakeholder map, and a draft ontology. The customer is leaning in. The platform is ready. You sit down to decide what to actually build.

This moment is harder than it looks. You have, on your desk:

  • A VP demanding a 14-point lift in on-time delivery
  • A dispatcher with a wishlist of 30 quality-of-life improvements
  • An IT director who wants a unified data layer “while we’re at it”
  • A platform with so much capability that any of the above is plausibly buildable

The temptation is to be brave and pick all of it. Or, just as bad, to be cautious and pick almost none of it. Both fail. The skill is to pick the single smallest slice that, when shipped and adopted, makes the next bigger slice obvious and fundable.

This is MVP scoping in the FDE context, and it has its own rules.

What “MVP” means here

The acronym “MVP” has been worn down by misuse. In an FDE engagement, an MVP has a specific definition:

An MVP is a working system that one real user uses for real work to produce a measurable outcome within one iteration of the deployment loop.

Unpack each clause:

  • Working system — actual deployed software, not a demo. The user can open it without you.
  • One real user — a specific human, named, by week 2. Not “dispatchers” — Maria, the senior dispatcher.
  • Real work — they would do this task whether or not you existed. Your software changes how they do it.
  • Measurable outcome — at the end of the iteration you can answer “did it help?” with a number or a clear yes/no, not “the user said it was nice.”
  • One iteration — one week, two at most. Anything longer than two weeks is not an MVP; it is a project.

This is a tight definition. It is also a useful one — it filters out 80% of bad MVP candidates immediately.

The four filters

Apply these in order. If a candidate fails any one, it is not a viable MVP. Find another.

Filter 1 — One user

Pick one person whose day will change. The MVP exists or fails based on whether that one person uses it.

Why this works: it forces specificity. “The dispatchers will use it” is a hope; “Maria will use it tomorrow at 5:30 AM” is a plan. You can interview Maria, watch Maria, demo to Maria, and measure Maria’s behavior. You can do none of those things with an abstract noun.

Why this is hard: customers will push you to “make it useful for all the dispatchers.” Resist. The first version that delights one user is more valuable than the first version that mildly inconveniences five. Generalization comes after adoption, not before.

Filter 2 — One workflow

Pick one workflow — one sequence of steps the user takes — to support end-to-end. Not three workflows partially. Not one workflow plus “and also a dashboard.”

Why this works: workflows have edges. You either supported it through to the end, or you didn’t. There is no half-credit. This sharpens scope debates dramatically.

Why this is hard: workflows are interconnected. The dispatcher’s morning batch ritual touches load assignment, driver scheduling, GPS, and customer notification — all four. The FDE move is to pick the minimal coherent slice that makes the most fragile step in that workflow visibly better, and leave the others alone for now.

Filter 3 — One week (or two, max)

Pick something you can ship in one loop iteration. If the build looks like more than two weeks, you have either picked too big a slice or underestimated the difficulty — and both demand scope reduction, not heroics.

Why this works: timing forces honesty. A one-week MVP cannot afford a custom auth flow, a polished UI, or three-system integration. It will have rough edges. Those rough edges are not a bug — they are the point. The MVP exists to learn, not to be admired.

Why this is hard: engineers, especially senior ones, want to ship things they are proud of. You will need to make peace with shipping something visibly imperfect. The customer is paying for outcomes, not artifacts.

Filter 4 — One outcome

Pick one measurable outcome that you and the customer agree counts as “this worked.”

The outcome must be:

  • Concrete — a count, a time, a yes/no, an observed behavior
  • Attributable — caused by your MVP, not by something else that happened that week
  • Quick — measurable within the iteration, not “we’ll see in a quarter”

Examples of good outcomes:

  • “Maria opens the morning view before her 6:45 call to Doug, for 4 of the next 5 mornings.”
  • “Maria stops updating the 6:30 spreadsheet.”
  • “The first ‘load slipping’ alert is acknowledged within 5 minutes.”

Examples of bad outcomes (vague, slow, or unattributable):

  • “On-time delivery improves.” (Slow, unattributable in one week.)
  • “Dispatchers are happier.” (Vague.)
  • “We deliver business value.” (You are not even trying.)
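The four filters are mechanical enough to encode as a checklist. Here is a minimal sketch of that idea; the `Candidate` structure and its field names are hypothetical, invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical shape for a candidate MVP; the fields mirror the four filters.
@dataclass
class Candidate:
    name: str
    users: list[str]       # named humans, not role nouns
    workflows: list[str]   # workflows supported end-to-end
    weeks_to_ship: int     # honest build estimate
    outcomes: list[str]    # concrete, attributable, quick measures

def filter_failures(c: Candidate) -> list[str]:
    """Apply the four filters in order; return every filter the candidate fails."""
    failures = []
    if len(c.users) != 1:
        failures.append("one user")
    if len(c.workflows) != 1:
        failures.append("one workflow")
    if c.weeks_to_ship > 2:
        failures.append("one week (two, max)")
    if len(c.outcomes) != 1:
        failures.append("one outcome")
    return failures

candidate = Candidate(
    name="Morning Dispatch View",
    users=["Maria Hernandez"],
    workflows=["6:00 AM slipping-loads triage"],
    weeks_to_ship=1,
    outcomes=["Maria opens the view before 6:45 on 4 of 5 weekdays"],
)
print(filter_failures(candidate))  # an empty list means all four filters pass
```

The point of the exercise is the failure list, not the data structure: a candidate that fails any filter is rejected outright, never patched.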

The scoping conversation

Once you have a candidate MVP that passes the four filters, you negotiate it with the customer. This conversation has predictable failure modes.

Sponsor: “Can we also add X?”

Almost every sponsor, presented with a sharply scoped MVP, will try to add to it. They are not being unreasonable — they have a number to hit, and your slice may not look big enough to move it.

Your move is not “no.” Your move is “yes, and here’s where that fits.” Lay out a phased plan:

  • Iteration 1 (week 3): the MVP you have scoped
  • Iteration 2 (week 4): the most likely follow-on, conditional on iteration 1 succeeding
  • Iteration 3 (week 5): the next, similarly conditional
  • And so on

Now the sponsor sees that their X is not being dropped — it is sequenced. If iteration 1 succeeds, you build X next. If it fails, the lesson informs what comes next instead. Either way they get progress visibility, and you get scope discipline.

User: “What I really need is Y.”

Users are often more specific and more accurate than sponsors. If your scoped MVP is not what the user actually needs, listen carefully. There is a meaningful difference between:

  • “That isn’t what I need” — re-scope, the user is the boss of the workflow
  • “I’d also love it if…” — defer, add to iteration 2 candidates
  • “I’d rather you fix Y first” — ask why; their priority may be more leveraged than yours

The dispatcher who quietly tells you the GPS portal is wrong about half the time is signaling something more important than the slick view you were planning to build.

Gatekeeper: “We can’t deploy that in one week.”

IT will sometimes (often correctly) tell you that even your one-week MVP cannot deploy to production in a week. Their security review takes 30 days; their change advisory board meets on the 15th; their network access for new services requires the CIO’s signature.

The FDE answer: decouple “build” from “deploy to production.” Your iteration-1 MVP does not need to live in production. It can live on a developer environment that Maria can reach from her laptop. Iteration 1 buys learning; iteration 4 or 5 buys the production cutover. Treat IT’s deployment timeline as part of the engagement plan, not as a blocker on iteration 1.

Anti-patterns to recognize

Scoping mistakes have signatures. Watch for these.

The replatform trap

Symptom: the MVP is “replace SAP with our platform.”

Why it fails: replacing a 30-year-old system is a 5-year project. You will spend month 1 reproducing 40% of SAP’s features and never reach the part where you are better than SAP.

Fix: pick one workflow that touches SAP and improve it. Leave SAP in place as a system of record. Be a layer above, not a replacement.

The data-platform trap

Symptom: the MVP is “build a unified data layer.”

Why it fails: data platforms are infrastructure. They produce no value visible to operators. Six weeks in, the dispatcher’s day is unchanged and the sponsor wonders what they bought.

Fix: pick a user-facing app that incidentally requires the first slice of the data layer. Ship the app. The data layer accretes underneath it as you build more apps.

The everything-at-once trap

Symptom: the MVP has bullet points for five workflows, three user types, and a dashboard.

Why it fails: each component is half-done in week 1. None reaches the “one user uses it for real work” bar. The retro is muddled.

Fix: ruthlessly pick the single most leveraged slice. The other four will still be there in week 4 if iteration 1 succeeds — and if iteration 1 fails, you saved 4 weeks of wasted work on the other four.

The demo-only trap

Symptom: the MVP is built to look good in the Friday demo, but it cannot survive Maria opening it without you.

Why it fails: you optimize for the wrong audience. The sponsor is delighted; the dispatcher never uses it; the engagement coasts to a renewal that never comes.

Fix: install on the user’s machine. Stop showing. Start watching.

The platform-tour trap

Symptom: the MVP shows off the platform’s capabilities — “look, an ontology! look, AI! look, a workflow engine!” — but is not anchored to a problem.

Why it fails: you have built a platform demo, not a customer outcome.

Fix: every screen of the MVP should map to a sentence from a user interview. If you cannot point at a screen and quote the operator who needed it, cut the screen.

The MVP plan artifact

By end of week 2 you produce a one-page MVP plan. The structure:

MVP Plan — Iteration 1 — Week 3

The problem
  [one-paragraph problem statement from discovery]

The user
  [name, role, when and where they will use it]

The workflow
  [the specific sequence of steps the MVP supports, start to finish]

What we will build
  [object types touched, the single screen or action, data sources used]

What we will not build (yet)
  [the explicit "not in scope" list — auth flow polish, multi-user, edits, etc.]

The outcome we will measure
  [the single, specific, observable thing that defines success]

Risks and mitigations
  [3 max — the things that could derail iteration 1]

Next iteration candidates
  [3 things we'd consider for iteration 2, ranked]

That page is what you walk into the week-2 review with. It is what the sponsor signs off on. It is what the team works from on Monday morning of week 3.

A worked Northbound MVP plan

MVP Plan — Iteration 1 — Week 3 of Northbound Engagement

The problem
  Every morning, senior dispatcher Maria spends 45 minutes manually
  consolidating SAP, the GPS portal, and her spreadsheet to identify
  which loads are slipping ETAs by 6:45, when she calls hub manager Doug.

The user
  Maria Hernandez, senior dispatcher.
  Uses it between 6:00 and 7:00 AM, weekdays, at her desk.

The workflow
  1. Maria opens the morning view at ~6:00 AM
  2. The view shows all active loads with planned vs. live ETA
  3. Loads slipping by >15 minutes are highlighted
  4. Maria takes the list of slipping loads to her 6:45 call with Doug

What we will build
  - Object types: Load, Stop, GPSPing (read-only)
  - Datasources: nightly SAP load export, GPS portal hourly poll
  - Single screen: "Morning Dispatch View" — sortable list, slip highlight
  - URL on Maria's laptop, no auth (single-user pilot device)

What we will not build (yet)
  - Editing loads or ETAs
  - Mobile access
  - Multi-user / per-dispatcher views
  - Reassignment workflow (iteration 2)
  - Doug's view (iteration 3)
  - Notifications / alerting (iteration 4)

The outcome we will measure
  Maria opens the Morning Dispatch View before 6:45 AM on at least
  4 of the next 5 weekdays. Bonus signal: she stops manually
  updating the 6:30 spreadsheet.

Risks and mitigations
  1. SAP nightly export is stale by morning.
     → Mitigation: surface a "last updated at" timestamp prominently;
       discuss intra-day refresh with IT in week 3.
  2. GPS portal goes down (Maria reported this).
     → Mitigation: show "GPS unavailable" state explicitly. Do not
       silently show stale ETAs.
  3. Maria is on vacation week 3.
     → Mitigation: confirm her schedule on day 1 of week 3; if she
       is out, switch user to the morning-shift dispatcher Jorge.

Next iteration candidates
  1. Stop reassignment action (typed action via the ontology).
  2. Doug's view at the east hub.
  3. Slip alerts pushed to Maria's phone.

That plan fits on one page and makes its commitments specific; a sponsor, a user, and a gatekeeper can all read it and understand what week 3 looks like.
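The plan's slip-highlight rule and its second risk mitigation (an explicit "GPS unavailable" state instead of silently stale ETAs) can be sketched in a few lines. All names here (`Load`, `classify`, `SLIP_THRESHOLD`) are hypothetical, a sketch of the classification logic rather than the actual build:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

SLIP_THRESHOLD = timedelta(minutes=15)  # highlight loads slipping by >15 minutes

@dataclass
class Load:
    load_id: str
    planned_eta: datetime
    live_eta: Optional[datetime]  # None when the GPS feed is down or has no pings

def classify(load: Load) -> str:
    """Return the display state for one row of the Morning Dispatch View."""
    if load.live_eta is None:
        return "GPS unavailable"  # never silently show a stale ETA (risk 2)
    slip = load.live_eta - load.planned_eta
    if slip > SLIP_THRESHOLD:
        return "slipping"
    return "on track"

loads = [
    Load("L-1041", datetime(2024, 5, 6, 9, 0), datetime(2024, 5, 6, 9, 40)),
    Load("L-1042", datetime(2024, 5, 6, 9, 30), datetime(2024, 5, 6, 9, 35)),
    Load("L-1043", datetime(2024, 5, 6, 10, 0), None),
]
for load in loads:
    print(load.load_id, classify(load))
# prints: L-1041 slipping / L-1042 on track / L-1043 GPS unavailable
```

Note that the failure state is a first-class output of the classifier, not an exception: the view has to render "GPS unavailable" as visibly as it renders "slipping".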

The hardest scoping decision: when to say no

The single most career-defining scoping moment in an FDE engagement is when a sponsor asks for something you should not build, and you have to say no.

Senior FDEs say no in a particular way:

  1. Reflect the ask back. “If I understand, you want X by week 4 because Y.”
  2. Acknowledge the value. “I can see why Y matters.”
  3. Surface the cost. “Building X in week 4 means we don’t ship the Maria pilot. We’d lose the chance to learn from her first.”
  4. Offer the sequenced alternative. “If iteration 1 lands, we can pull X forward to iteration 3.”
  5. Ask for a decision. “Which path do you want?”

This works because you have given the sponsor what they actually want — agency and visibility — without giving them the bad outcome they asked for. Most sponsors, given that framing, will agree to the sequence.

The ones who do not — who insist on the bad scope despite your push-back — have given you valuable information. Either the politics demand the bad scope (in which case you adapt, with eyes open), or the sponsor is not actually buying outcomes (in which case the engagement is in trouble and you need to escalate inside both organizations).

Living with imperfect scoping

You will scope an MVP, build it, and discover in week 4 that you picked wrong. The slice did not move the needle. The user found a workaround that defeats the point. The data was worse than expected.

This is not failure — this is the deployment loop working. The right response:

  1. Name the miss in the retro. Out loud, in writing.
  2. Update the model and the next problem statement.
  3. Re-scope iteration 2 against the new understanding.
  4. Keep the cost of the miss visible — one week, not six.

The reason you scoped small in the first place is precisely to make the cost of being wrong small. Honor that decision when you find out you were wrong.

Key terms to remember

  • MVP — one user, one workflow, one week, one outcome
  • Filter pass — a candidate MVP must clear all four filters
  • Iteration plan — the sequence of next likely MVPs, surfaced to sponsors for visibility
  • Decouple build from deploy — iteration 1 does not need production
  • The replatform / data-platform / everything / demo-only / platform-tour traps — the five common scoping failures
  • MVP plan — the one-page artifact that discovery and capture converge into

What’s next

Phase 2 is complete. You have a problem (from discovery), a model (from capture), and a plan (from scoping). It is now Monday of week 3.

Phase 3 — Technical Foundations — turns to the keyboard work. We start with the messiest part of any FDE engagement: getting data out of the customer’s systems in a usable form. That is data plumbing in the wild.