The FDE Deployment Loop
Discover → prototype → deploy → measure → iterate: the weekly rhythm that produces customer impact in 90 days.
Why the loop exists
The FDE engagement model has a peculiar property: at the start, neither you nor the customer can specify what you are going to build. The customer knows they have a problem. You know your platform. Neither side knows yet how the two fit. The only way to find out is to try something, show it, and adjust — and to do this often enough that by the time the engagement ends, both sides have converged on a system that works.
The deployment loop is the operating cadence that produces this convergence. It is to the FDE what the sprint is to a Scrum team — but tuned for a different problem, with a different set of failure modes.
The five stages
A single iteration of the loop has five stages. In a healthy engagement, you complete the whole loop every week, sometimes faster.
┌─────────────┐
│  Discover   │  Sit-alongs, interviews, data exploration
└──────┬──────┘
       ▼
┌─────────────┐
│  Prototype  │  Build the smallest thing that could work
└──────┬──────┘
       ▼
┌─────────────┐
│   Deploy    │  Put it in front of a real user with real data
└──────┬──────┘
       ▼
┌─────────────┐
│   Measure   │  Did they use it? Did it help? What broke?
└──────┬──────┘
       ▼
┌─────────────┐
│   Iterate   │  Back to the top with what you learned
└─────────────┘

Each stage has a different cadence, a different artifact, and a different failure mode.
1. Discover
Goal: Identify the single most valuable, smallest thing you could build next.
Artifact: A one-paragraph problem statement and a sketch of the solution shape.
How long: 1–3 days in the first loops; hours in later loops.
Discovery is not “read the spec.” It is sit-alongs, interviews, walking the floor, and reading data. The output is not a document — it is a decision: this week, we will try to solve this problem with this approach.
A good discovery output looks like:
“Dispatchers spend 20 minutes every morning copying loads from the SAP screen into a shared spreadsheet. If we replace the spreadsheet with a live view of the SAP data, we save them 20 minutes a day and we remove the largest source of stale data downstream. We’ll prototype the live view this week, backed by the daily SAP export, with no editing yet.”
Specific. Bounded. Tied to an observed pain. Has a measurable outcome.
Failure mode: perpetual discovery. You stay in discovery for weeks because “we don’t have enough information yet.” You never have enough information. Move on.
2. Prototype
Goal: Build the smallest thing that could plausibly solve the problem.
Artifact: Working software, however ugly.
How long: 1–3 days.
The prototype is not pretty. It does not handle edge cases. It does not have proper auth. It may be wired to a CSV instead of the real datasource. None of this matters yet. What matters is that a human can sit in front of it, click around, and form an opinion.
A good FDE prototype:
- Uses real (or realistic) customer data, not Lorem Ipsum
- Solves one workflow, not three
- Has at most one screen
- Is built fast enough that you would not feel bad throwing it away
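The “wired to a CSV” point is literal. A minimal sketch of what a first-loop prototype can look like under the hood: one file read, one HTML table, no auth, no edge cases. The column names and data are made up for illustration:

```python
import csv
import io

# A stand-in for the real datasource: the customer's daily export, as CSV.
# Column names are hypothetical.
EXPORT = """load_id,origin,destination,status
L-1042,Leeds,Glasgow,in_transit
L-1043,Leeds,Bristol,delayed
"""

def render_table(csv_text):
    """Render the export as a bare HTML table -- the whole 'one screen'."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    header = "".join(f"<th>{h}</th>" for h in rows[0].keys())
    body = "".join(
        "<tr>" + "".join(f"<td>{v}</td>" for v in r.values()) + "</tr>"
        for r in rows
    )
    return f"<table><tr>{header}</tr>{body}</table>"

html = render_table(EXPORT)
```

If something this small already lets an operator click around and form an opinion, it has done its job; throwing it away later costs nothing.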
Failure mode: engineering perfectionism. You spend three days getting the production-grade auth flow right before the customer ever sees the prototype. Don’t.
3. Deploy
Goal: Put the prototype in the hands of a real user.
Artifact: A working URL (or installed app) that an operator has clicked.
How long: Hours, ideally.
“Deploy” in this loop is not “deploy to production.” It means deploying far enough that a real person at the customer can use it. That might mean:
- Hosting it on a developer environment they can reach from their laptop
- Installing it on the customer’s dev box at the desk next to yours
- For air-gapped customers, walking it over on an approved USB drive and demoing live
The key is friction-free access for the user. If they have to install something, configure a VPN, or remember a password, they will not use it on their own, and you will only get reactions in the meeting where you show them — which is much less honest signal.
Failure mode: demo-only deploy. The prototype only ever runs on your laptop, projected once a week. The customer never actually uses it between demos. You learn very little.
4. Measure
Goal: Find out whether the prototype helped, hurt, or got ignored.
Artifact: A short note on what happened. Quantitative if you can, qualitative if not.
How long: A day of observation plus a 30-minute conversation.
What to measure is engagement-specific, but always include:
- Did they open it? (Logs, telemetry, or just asking.)
- Did it replace something? (The spreadsheet they used to use — is it still being updated?)
- Did it cause problems? (New bugs, friction, fear, distrust.)
- What did they ask for next? (The most reliable signal of where to go next.)
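The first question on that list can often be answered with a few lines of log analysis. A minimal sketch, assuming a hypothetical tab-separated event log (timestamp, user, event); real telemetry will look different, but the shape of the question is the same:

```python
from collections import Counter
from datetime import datetime

# Hypothetical tab-separated event log: timestamp, user, event.
# The format is assumed for illustration; adapt it to whatever you actually have.
LOG_LINES = [
    "2024-05-07T08:01:12\tdispatcher_a\topen",
    "2024-05-07T08:40:03\tdispatcher_a\topen",
    "2024-05-07T09:15:44\tdispatcher_b\topen",
    "2024-05-08T08:05:00\tdispatcher_a\topen",
]

def opens_per_user_per_day(lines):
    """Count 'open' events per (user, day): the crudest honest usage signal."""
    counts = Counter()
    for line in lines:
        ts, user, event = line.split("\t")
        if event == "open":
            counts[(user, datetime.fromisoformat(ts).date())] += 1
    return counts

counts = opens_per_user_per_day(LOG_LINES)
```

Even this crude count answers the first question honestly: zero opens on Wednesday is a finding, not a failure of measurement.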
Real measurement is often unromantic:
“The dispatcher opened the new view 17 times on Tuesday. Wednesday morning she was back in the spreadsheet because the SAP data was 6 hours old and she didn’t trust it. The lesson: we need to surface the data freshness timestamp prominently and probably move to a more frequent sync.”
That insight — about freshness — is now the next discovery input. The loop has done its job.
Failure mode: vanity metrics. You measure things that look good in your weekly report instead of things that tell you whether the system is working. Operators “viewing” a dashboard is not the same as operators using it to make decisions.
5. Iterate
Goal: Feed what you learned back into the next discovery.
Artifact: A revised problem statement for the next loop.
How long: A short retro at the end of the week.
Iteration is the stage where most engineering teams trip up. They build, they ship, they measure — and then they move on to the next feature without actually integrating what they learned. The signal evaporates.
FDEs hold a brief retro at the end of each loop:
- What did we believe at the start of the week that we no longer believe?
- What did we learn that changes the next thing we should build?
- What in the platform / data / customer org turned out to be different from what we assumed?
The output is a single revised problem statement that goes into next week’s Discover stage. If you cannot articulate one, the loop has not closed and the engagement is drifting.
How the loop scales over an engagement
A typical 6-week FDE engagement runs the loop six times, with shifting emphasis:
| Week | Discover heavy on… | Prototype is… | Deploy means… | Measure focuses on… |
|---|---|---|---|---|
| 1 | Stakeholders, workflows | A throwaway sketch in 1 screen | Demo only | “Did we name the problem right?” |
| 2 | The right datasource | An end-to-end thin slice | One user on their machine | “Does the slice make sense to them?” |
| 3 | The ontology design | The first real domain model | A small pilot group | “Does the model fit the work?” |
| 4 | Workflow gaps | A working operational app | Pilot team uses it daily | “Are they adopting it?” |
| 5 | Production readiness | Hardened version of the app | Full pilot rollout | “Does it hold under real load?” |
| 6 | Hand-off needs | Runbook + training | Production cutover | “Can the team operate without us?” |
Notice how Discover never goes away — it just shifts focus. By week 5 the discovery questions are operational, not domain-level. By week 6 they are about hand-off, not features.
The relationship between the loop and the platform
A subtle but important point: the loop only works if the platform you are deploying lets you move at loop speed. If standing up a new object type takes a week of platform engineering, you cannot run a weekly loop. If wiring a new datasource requires four approvals, you cannot run a weekly loop.
This is why FDE-shaped platforms (Foundry, Lattice, the modern AI platforms) are built around fast composition of typed primitives. The platform’s job is to make every step of the loop cheap.
Conversely, an FDE working with a slow platform should pick their loop speed accordingly. Two-week loops on a slow platform beat trying to force weekly loops and shipping garbage.
Loop pathologies
Waterfall creep
Symptoms: discovery stretches into week 3; the customer signs off on a long spec; you stop showing weekly progress.
Fix: re-anchor on a Friday demo, even if what you show is partial.
Demo theatre
Symptoms: every Friday demo goes well; nobody uses the system between demos.
Fix: instrument actual usage. Talk to the operator on Tuesday afternoon. Find out what they really do.
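Instrumenting actual usage can start very small. One possible sketch, not a prescribed API: record one event per meaningful action (a reroute, an edit) rather than per page view, so Tuesday-afternoon usage is distinguishable from Friday-demo clicks. The sink and field names here are illustrative:

```python
import json
import time

def make_usage_logger(sink):
    """Return a logger that records one JSON event per meaningful action.

    `sink` is anything with .append() -- a plain list here; in a real
    deployment it might wrap a file or a log shipper. All names here are
    illustrative, not a platform API.
    """
    def record(user, action, **details):
        sink.append(json.dumps(
            {"ts": time.time(), "user": user, "action": action, **details}
        ))
    return record

# Log decisions, not views: a reroute is a decision, a page load is not.
events = []
log = make_usage_logger(events)
log("dispatcher_a", "reroute_load", load_id="L-1042")
```

A week of events like these tells you whether the system is making decisions, which is exactly what a well-attended Friday demo cannot.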
Prototype-to-production gap
Symptoms: you have shipped six prototypes; none of them are on a path to running in production.
Fix: at the start of week 3 at the latest, pick the one most likely to survive and start hardening it. A pile of throwaway prototypes is not an engagement outcome.
Loop divergence
Symptoms: each week you build something unrelated to last week’s learning.
Fix: write down the revised problem statement at the end of each week. Read it at the start of next.
Customer disengagement
Symptoms: the customer is “too busy” for Friday demos; sit-alongs get cancelled; emails go unanswered.
Fix: this is a red flag. The engagement may be politically dead. Escalate within both organizations. Do not just keep building.
A worked example
You are in week 2 of the Northbound Freight engagement.
Discover (Monday): Sit-along reveals the dispatcher reroutes a load when she sees the driver’s ETA slip by >15 min. She does this manually by looking at three browser tabs.
Prototype (Tue–Wed): You build a single screen that lists active shipments with their live ETA delta against the plan, sortable, with a highlight for any delta >15 min.
Deploy (Thursday morning): You install it on the dispatcher’s laptop. She bookmarks it. You leave.
Measure (Thursday afternoon + Friday morning): Logs show she refreshed the screen 40 times Thursday. Friday morning she tells you she rerouted two loads using it, but also that one ETA was wrong because the GPS feed lags 10 minutes on rural routes — and she didn’t trust the system after that.
Iterate (Friday afternoon retro): Next week’s problem statement: “Surface GPS feed freshness on the ETA card, and either show a confidence interval or hide the ETA when the feed is stale.”
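The dispatcher's rule and next week's problem statement translate almost directly into logic. A hedged sketch: the 10-minute staleness threshold comes from the observed GPS lag and the 15-minute reroute rule from the sit-along, but the function name and constants are illustrative, not product decisions:

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(minutes=10)    # from the observed rural GPS lag; tune per feed
REROUTE_DELTA = timedelta(minutes=15)  # the dispatcher's own rule from the sit-along

def eta_status(planned_eta, live_eta, feed_updated_at, now):
    """Classify a shipment row for the dispatcher's screen.

    'stale'   -> the feed is too old to trust; hide or flag the ETA
    'reroute' -> the ETA has slipped past the dispatcher's threshold
    'ok'      -> nothing to do
    """
    if now - feed_updated_at > STALE_AFTER:
        return "stale"
    if live_eta - planned_eta > REROUTE_DELTA:
        return "reroute"
    return "ok"
```

Note that staleness is checked first: a delta computed from a lagging feed is exactly the kind of wrong answer that cost the system the dispatcher's trust.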
That is one healthy loop. Six of those, in sequence, ship a system.
Key terms to remember
- Loop — one iteration of discover → prototype → deploy → measure → iterate, ideally weekly
- Problem statement — the single-paragraph artifact that anchors a loop
- Sit-along — observational input to discover
- Deploy — get the prototype to a real user, not necessarily production
- Retro — the short weekly review where iteration happens
What’s next
This concludes Phase 1 — the mindset of FDE work. You now know what an FDE is, how the role is delivered, and the loop that defines the work.
In Phase 2 we move from mindset to method, starting with the first thing an FDE actually does on Monday of week 1: discovery and stakeholder interviewing.