📖 Lesson ⏱️ 75 minutes

Working on Customer Infrastructure Securely

Operating inside air-gapped networks, classified environments, and customer-managed clouds without breaking trust

Where FDE work happens

Most software engineers work in a single environment: the cloud account their company owns. FDE work is different. You operate, often simultaneously, in multiple environments that are not yours:

  • Your company’s dev environment (you own it)
  • The customer’s production environment (they own it; you have a guest visa)
  • The customer’s classified or restricted environment (they own it; you have a very narrow guest visa)
  • A network segment that does not connect to the internet at all

The mistake is to treat these as variants of the same setup. They are not. Each has a different threat model, a different operating discipline, and a different set of mistakes that get you removed from the engagement.

The four customer-environment topologies

Every customer engagement maps to one of four topologies. The first decision of Phase 3 deployment is figuring out which one you are in.

Topology 1 — SaaS (you host, they consume)

The platform runs in your company’s cloud. Customer data comes in through approved integrations. Customer users come in through SSO.

Common in: vertical AI startups, commercial enterprise SaaS, pilot engagements.

What it changes for the FDE:

  • Compliance burden is on your company (SOC 2, ISO 27001, sometimes HIPAA / FedRAMP).
  • Customer security review is heavy upfront; once cleared, day-to-day operations are normal.
  • Data lives across organizational boundaries — customer’s data on your infra is your responsibility.

Topology 2 — Customer cloud (single-tenant in their account)

The platform runs in their AWS / Azure / GCP account, deployed by your team but under their billing and IAM.

Common in: Fortune 500 enterprises, defense, financial services.

What it changes for the FDE:

  • Their IAM, their VPC, their security groups. You operate inside their model.
  • Their cost is on their bill — over-provisioning has political consequences.
  • Their compliance regime applies, not yours.
  • Change control follows their CAB cadence, not your sprint.

Topology 3 — On-prem (their datacenter, your software)

The platform runs in the customer’s datacenter. You ship images / artifacts; their ops team deploys.

Common in: government, defense, regulated industries, anyone with sovereign-data requirements.

What it changes for the FDE:

  • Your CI/CD does not reach the deployment environment.
  • Software gets in via signed artifacts shipped through approved channels (and sometimes through customs).
  • Updates have lead time measured in weeks.
  • You are dependent on the customer’s ops team for everything operational.

Topology 4 — Air-gapped / classified

A specific subset of on-prem: the deployment environment has no connection to the internet, and possibly classified handling requirements.

Common in: intelligence community, certain defense programs, some bank trading floors, some critical-infrastructure customers.

What it changes for the FDE:

  • You cannot npm install anything.
  • You cannot pull a Docker image.
  • All dependencies must be mirrored to the customer’s internal registry, ahead of time, signed.
  • Some integrations (LLMs, public APIs, telemetry) simply do not exist for you.
  • You may need a clearance to be on site.

A useful exercise on day one of any new engagement: ask the customer’s IT director to draw the topology on a whiteboard. Most can draw it in five minutes. Most engagements go six weeks without asking. Don’t be that engagement.

What changes by topology

A condensed table for quick reference.

| Concern | SaaS | Customer cloud | On-prem | Air-gapped |
|---|---|---|---|---|
| Dependencies | npm/pip OK | npm/pip OK | Pre-mirrored | All mirrored + signed |
| CI/CD | Your pipeline | Their cloud | Artifact handoff | Sneakernet |
| Secrets | Your KMS | Their secret store | Their vault | Hardware tokens / paper |
| Telemetry | Your stack | Configurable | Their stack only | None outbound |
| AI / LLM | Public API OK | Often OK | Private model | On-prem model only |
| Updates | Continuous | Weekly | Monthly | Quarterly+ |
| You can SSH | Yes | Sometimes | Rarely | Never |
| Debugging | Your tools | Their VDI | On-paper | In their seat |

The further right you go, the more your work consists of careful preparation and the less of it is live debugging.

The “guest in their house” mindset

A reliable mental model for working inside any customer environment that is not yours: you are a guest in their house.

This means:

  • You do not move furniture. Don’t change firewall rules, don’t open ports, don’t install agents that weren’t approved.
  • You don’t read mail that isn’t yours. Don’t query tables you weren’t given access to, even if you can. Especially if you can.
  • You leave the kitchen cleaner than you found it. If you debug something on a shared box, clean up the temp files, remove your test data, restore configs.
  • You ask before opening doors. Want to add a new database? A new dependency? A new outbound URL? Ask first. The first time you act without asking, trust evaporates.
  • You tell them when you broke something. A self-reported small mistake is forgivable. A discovered cover-up is fatal to the engagement.

Senior FDEs internalize this so completely that it ceases to feel like a constraint and starts to feel like respect.

Least privilege, on day one

The single biggest mistake junior FDEs make in customer environments: asking for admin access “to be productive.”

The senior FDE move is the opposite: ask for the minimum, and add scope deliberately when the work requires it. This:

  • Protects you from accusations if anything goes wrong
  • Builds the customer’s trust steadily as you demonstrate good judgment
  • Models the discipline you want their team to adopt after you leave

What “minimum” looks like in practice:

  • Read-only on data sources until you have an explicit need to write
  • One service account per integration, not a shared “fde” account
  • Personal access where required by audit (you logged in, not “the team logged in”)
  • Time-bound access for sensitive operations (a 24-hour elevated session, not standing admin)
  • MFA everywhere, including service accounts where the platform supports it
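Time-bound access in particular is worth making mechanical rather than aspirational. A minimal sketch of the idea, assuming nothing about any real IAM system (the actor and scope names here are invented for illustration):

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

class ElevatedSession:
    """Illustrative sketch: a grant of elevated access that expires by
    default, instead of quietly becoming standing admin."""

    def __init__(self, actor: str, scope: str, ttl_hours: int = 24):
        self.actor = actor                  # e.g. "user:maria" (hypothetical)
        self.scope = scope                  # e.g. "db:write" (hypothetical scope name)
        self.granted_at = datetime.now(timezone.utc)
        self.expires_at = self.granted_at + timedelta(hours=ttl_hours)

    def is_active(self, now: Optional[datetime] = None) -> bool:
        # Access checks compare against the expiry; no one has to remember
        # to revoke anything for the elevation to end.
        now = now or datetime.now(timezone.utc)
        return now < self.expires_at
```

The point of the sketch is the default: the safe state (expired) happens automatically, and the unsafe state (active elevation) requires a deliberate, recent grant.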

Secrets, properly

Where credentials live decides your security posture. The rules from the API integration lesson are non-negotiable here:

  1. Secrets live in the customer’s secret store. Not in your Git repo, not in their wiki, not in a Slack DM, not in .env, not in a config map, not in CI variables.
  2. You commit a secret inventory to the engagement: every credential, what it is for, where it lives, who can rotate it, when it was last rotated.
  3. You commit a rotation procedure that the customer’s team can execute after you leave.
  4. You commit a break-glass procedure for compromise: who to call, what to disable, how to recover.

A practical detail many FDEs miss: the customer’s existing systems probably do not follow this discipline. Their SAP service account password may have been the same for 8 years and live in a Word document. Do not propagate that practice into the system you build. Set the standard cleanly from day one, even if it is higher than the customer’s baseline. They will thank you.

Audit logging that survives a security review

Every action on the platform — at minimum every action in the semantic layer’s action types — emits an audit log entry. The required fields:

timestamp_utc          2026-05-14T11:23:45.182Z
actor                  user:maria.hernandez@northbound.com
action                 assignDriverToStops
parameters             { driver_id: 4012, stop_ids: [891, 892], ... }
target                 DriverAssignment:da_2026-05-14_4012_a
outcome                success
client_ip              10.42.6.18
session_id             sess_b9e2c4...
correlation_id         req_ab12cd34...

Two more pieces you want:

  • Tamper-resistant storage. Audit logs live somewhere the platform operators cannot silently rewrite. Typically: append-only storage, hash-chained, or shipped to a customer-owned SIEM.
  • Retention policy. Documented per regulatory regime. Default to longer than you’d guess: 7 years for SOX-touched data, 3 years minimum for general operational audit.
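Hash-chaining is simple enough to sketch directly. Each entry's hash covers the previous entry's hash, so silently rewriting any old entry breaks every hash after it. A minimal illustration (a production system would ship these hashes to customer-owned storage, not keep them in process memory):

```python
import hashlib
import json

class HashChainedLog:
    """Sketch of an append-only, tamper-evident audit log."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []            # list of (entry_dict, hex_hash)
        self._prev_hash = self.GENESIS

    def append(self, entry: dict) -> str:
        # The hash covers both this entry and the previous hash,
        # chaining every entry to everything before it.
        payload = json.dumps(entry, sort_keys=True) + self._prev_hash
        h = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append((entry, h))
        self._prev_hash = h
        return h

    def verify(self) -> bool:
        # Recompute the chain from genesis; any rewritten entry fails.
        prev = self.GENESIS
        for entry, h in self.entries:
            payload = json.dumps(entry, sort_keys=True) + prev
            if hashlib.sha256(payload.encode()).hexdigest() != h:
                return False
            prev = h
        return True
```

The design choice worth noting: tampering is detected, not prevented. Prevention comes from where the chain (or its head hash) is stored, which is why the bullets above pair hash-chaining with append-only storage or a customer-owned SIEM.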

Audit logging done poorly is worse than no audit logging — it gives a false sense of accountability while being useless under scrutiny. Done well, it is one of the most reusable parts of any FDE deployment.

Data classification and handling

Many customer datasets carry classification handling requirements you must respect.

Common regimes you will encounter:

| Regime | Where you’ll meet it | Key constraint |
|---|---|---|
| PII | Almost every customer | Encryption at rest + in transit, access logged, right-to-be-forgotten |
| PHI (HIPAA) | Any healthcare customer | BAA required; minimum-necessary access; audit on every access |
| PCI | Any customer touching card data | Network segmentation; quarterly scans; specific log requirements |
| CUI (US government) | DoD-adjacent commercial work | NIST 800-171 controls; FedRAMP-Moderate or higher hosting |
| ITAR / EAR | Defense, aerospace, some tech | US-person handling; export controls; severe enforcement |
| Classified (CONFIDENTIAL, SECRET, TS) | Intelligence and defense direct | Clearances required; air-gapped facilities; storage and discussion rules |
| GDPR (EU personal data) | Any EU customers/users | Data residency, DPIA, breach-notification timelines |

A few survival rules across all of them:

  • Don’t take data home with you. Not in a Slack message, not in an email, not in a screenshot, not on your laptop unless your laptop is explicitly authorized.
  • Don’t write the data into your AI tools unless they are an approved processor for that regime. ChatGPT, generic LLM APIs, your favorite code-completion tool — most are not.
  • Don’t move data across regimes. Customer data tagged CUI does not get copied into a non-CUI environment “to debug.” If you need to reproduce a bug, reproduce it in their environment.
  • Don’t make joke screenshots. It is funny once and a career-ending incident later.

If the customer’s regime is unfamiliar, ask the customer’s compliance lead to do a one-hour briefing for you in week 1. They will say yes. They almost never get asked.

AI and LLMs in regulated environments

A frontier-of-the-job concern that did not exist five years ago and now defines many FDE engagements.

A simple decision matrix:

| The customer’s data is… | And LLMs need to read it… | Then… |
|---|---|---|
| Public / commercial | Yes | Commercial APIs OK if SLAs match |
| Customer-confidential | Yes | Use the customer’s licensed instance; never a public API |
| Regulated (PHI, CUI) | Yes | Only on-prem or customer-cloud-private models |
| Classified | Yes | Only approved on-prem models; many cases not at all |

The instinct to “just use Claude” or “just use GPT-4” needs to be filtered through this matrix every single time. The cost of getting it wrong is not “your account gets rate-limited” — it is breach notification, regulatory penalty, and loss of the customer.
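Filtering "every single time" works better as a gate in code than as a memory exercise. A direct encoding of the matrix above, where the category labels and backend names are this lesson's informal vocabulary, not a formal standard:

```python
# Data classes map to the backends the matrix permits. The labels are
# illustrative categories from this lesson, not a compliance taxonomy.
ALLOWED_LLM_BACKENDS = {
    "public":       {"commercial-api", "customer-licensed", "private-model", "on-prem-model"},
    "confidential": {"customer-licensed", "private-model", "on-prem-model"},
    "regulated":    {"private-model", "on-prem-model"},
    "classified":   {"on-prem-model"},   # and only if explicitly approved; often not at all
}

def llm_backend_allowed(data_class: str, backend: str) -> bool:
    """Gate every 'just use an LLM' decision through the matrix.
    Unknown data classes default to deny."""
    return backend in ALLOWED_LLM_BACKENDS.get(data_class, set())
```

Note the default: an unrecognized classification permits nothing. In this domain, deny-by-default is the only defensible failure mode.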

This applies to your own developer tools too. The AI coding assistants you use for personal productivity must not be configured to send customer code to a non-authorized backend. Many corporate environments will configure this for you; many won’t. Check.

Your laptop and you

You are a security boundary. The customer’s IT team understands this, and the contract was priced with that risk in mind. Live up to it.

The baseline a serious FDE meets:

  • Full-disk encryption, with a strong password that is not your other passwords
  • Auto-lock under 5 minutes, screen lock when you walk away even for coffee
  • A password manager for every customer credential (and a clean separation between this engagement’s vault and your personal vault)
  • Hardware MFA (YubiKey, Titan) for high-risk accounts
  • Patched OS, patched browser, patched everything
  • No customer data on local disk except in approved working directories that get wiped at engagement end
  • A separate browser profile (or browser) for the customer; logged out, cleared, archived when the engagement ends

The customer’s IT team may require all of the above and more. Or they may issue you their corporate laptop, with their tooling installed, and forbid your laptop from touching their network. Either way: their rules win.

Travel and physical security

Less obvious but real: physical security at customer sites.

  • Visitor badges. Wear them visibly. Hand them back. Do not “borrow” someone else’s.
  • Tailgating. Do not hold the door for someone you do not know inside a controlled facility.
  • Phones in classified spaces. Many do not allow them. Find out before you arrive, not by setting off a klaxon.
  • Screen privacy. Your seatmate on the plane should not be able to read the customer’s load board over your shoulder. Get a privacy filter.
  • Lost or stolen devices. Report immediately to both the customer’s IT and your security team. Speed matters; cover-up is a career-ender.

The internal-tools mistake

A specific failure mode that has wrecked otherwise-good engagements: using your company’s internal tools for customer data.

Examples:

  • Pasting a customer query into your company’s general-purpose Slack channel for help
  • Logging customer error messages with full payloads to your company’s central log aggregator
  • Snapshotting customer data to “debug locally”
  • Using a personal GitHub gist to share a file with a colleague

Every one of these is a small action that can become a large incident. The rule:

Customer data stays inside the customer’s perimeter. Discussion of customer data follows the same rules.

If you need a colleague’s help, bring them into the engagement properly — give them the customer’s NDA, the customer’s accounts, the customer’s tools — or strip the data of identifiers and discuss the pattern, not the data.
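"Strip the data of identifiers" deserves a concrete shape. A minimal illustrative pass that masks obvious identifiers (emails, long numeric IDs) before you discuss a pattern outside the perimeter; real redaction needs a reviewed, regime-specific rule set, and these two regexes are only a sketch:

```python
import re

# Illustrative patterns only: emails and runs of 6+ digits (IDs, account
# numbers). A production redactor would be reviewed against the customer's
# actual identifier formats and classification regime.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
LONG_ID = re.compile(r"\b\d{6,}\b")

def redact(text: str) -> str:
    """Replace obvious identifiers so the pattern survives but the data does not."""
    text = EMAIL.sub("<email>", text)
    return LONG_ID.sub("<id>", text)
```

After redaction, "driver 4012881, contact maria.h@northbound.com" becomes "driver &lt;id&gt;, contact &lt;email&gt;" and you can discuss the failure pattern without moving customer data. When in doubt, don't redact and share; bring the colleague into the engagement properly.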

A pre-deployment security checklist

Before any iteration goes to production at a customer site, you should be able to answer yes to all of these.

  • Topology is documented (SaaS / customer cloud / on-prem / air-gapped) and acknowledged by IT
  • All credentials live in the customer’s secret store
  • Service identities are scoped per integration; no shared accounts
  • Audit logging is on, retained per policy, going to tamper-resistant storage
  • PII / PHI / classified handling rules are documented and followed
  • Network egress allowlist is documented and approved (for customer cloud / on-prem)
  • Backup and restore procedures are documented and tested
  • Incident response runbook exists and the customer’s on-call team has it
  • The break-glass account is documented and the credentials live somewhere secure
  • Your laptop is compliant with customer requirements
  • Your team’s customer-data discussion happens only inside customer channels
  • You can articulate the regulatory regime that applies, in plain English, to the customer’s compliance lead

Skip none.

Common failure modes

Mistakes specific to this work:

  • Pasting a token into a screenshot for a bug report → credential rotation, security review
  • Bringing customer data into your AI assistant → potential breach, regulator notification
  • Granting yourself extra access “temporarily” → audit finding; trust erosion
  • Working from a coffee shop on a sensitive customer’s data → known-bad practice in most regulated regimes
  • Using a personal email for a customer account → IAM hygiene failure; cleanup at engagement end is painful
  • Storing customer data in your personal cloud (Dropbox, iCloud) → compliance incident
  • Not reporting a small mistake → the small mistake becomes a discovered cover-up

Closing Phase 3

You have, by the end of Phase 3:

  • Data flowing from messy real-world sources, with validation, replay, and observability (Data Plumbing in the Wild)
  • A semantic layer that exposes a typed, governed model of the customer’s business (Designing the Semantic Layer)
  • Integrations to the customer’s surrounding systems, with reliability built in (API and Integration Patterns)
  • A secure operating posture that respects the customer’s environment and survives audit (this lesson)

This is the foundation. Nothing in Phase 4 — the apps, the agents, the dashboards — works without it. Many FDE engagements fail by skipping or rushing this foundation. Yours will not.

Key terms to remember

  • Topology — SaaS, customer cloud, on-prem, or air-gapped
  • Least privilege — start with the minimum access, add deliberately
  • Audit log — tamper-resistant, retained, structured record of every action
  • Data classification regime — PII, PHI, PCI, CUI, ITAR, classified, GDPR
  • Break-glass procedure — what to do when a credential is compromised
  • Egress allowlist — explicit list of outbound destinations approved for production
  • Internal-tools mistake — using your company’s general tools for customer-perimeter data

What’s next

Foundation poured. In Phase 4 the keyboard work shifts upward: on top of the semantic layer, you start building the operational applications that operators actually use. The next lesson kicks off with the workshop-style construction of the dispatcher’s app Maria has been waiting for since week 1.