Course Content
Versioning, Branching, and Migrations
Evolve the ontology safely: branches, semantic versioning, backfills, and breaking-change strategy
Ontologies change
A perfect ontology designed today will be wrong in six months — not because you were sloppy, but because the business changed. New regulations split a property in two. A new product line introduces an object type. A renamed enum value becomes a breaking change.
The question is not whether the ontology changes — it is how you change it without breaking the consumers depending on it.
Versioning
Every ontology should have a version. The most common scheme is semantic versioning — MAJOR.MINOR.PATCH.
```typescript
// ontology.config.ts
export default defineOntology({
  name: "logistics-ontology",
  version: "1.4.2",
  // ...
});
```

What goes where:
| Bump | Examples |
|---|---|
| PATCH (1.4.x) | Description edits, new derived property powered by an existing function, bug fixes in a function body |
| MINOR (1.x.0) | New object type, new optional property, new link type, new action |
| MAJOR (x.0.0) | Removed object type, removed/renamed property, type change on a property, removed action, removed enum value |
The rule is: minor and patch changes never break consumers. A consumer pinned to 1.4.0 should keep working when the ontology moves to 1.5.0 or 1.4.7.
Major bumps are allowed to break — but only after a deprecation window (more below).
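The table can be read as a lookup: given the set of changes in a release, the required bump is the most severe that any single change demands. A minimal sketch of that rule (the change names and helper are illustrative, not a real SDK API):

```typescript
// Sketch: classify ontology changes into the semver bump they require.
// Change names mirror the table above; this helper is hypothetical.
type Bump = "patch" | "minor" | "major";

const BUMP_FOR_CHANGE: Record<string, Bump> = {
  // PATCH: nothing consumers can observe structurally
  editDescription: "patch",
  fixFunctionBody: "patch",
  // MINOR: purely additive
  addObjectType: "minor",
  addOptionalProperty: "minor",
  addLinkType: "minor",
  addAction: "minor",
  // MAJOR: removals, renames, type changes
  removeObjectType: "major",
  removeProperty: "major",
  renameProperty: "major",
  changePropertyType: "major",
  removeEnumValue: "major",
};

// A changeset needs the most severe bump of any change it contains.
function requiredBump(changes: string[]): Bump {
  const rank: Record<Bump, number> = { patch: 0, minor: 1, major: 2 };
  return changes
    .map((c) => BUMP_FOR_CHANGE[c] ?? "major") // unknown change: be conservative
    .reduce<Bump>((worst, b) => (rank[b] > rank[worst] ? b : worst), "patch");
}
```

Treating unknown changes as major is a deliberate default: when in doubt, assume the change breaks someone.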
Pinning and discovery
Consumers should pin to a specific minor version range:
```typescript
// in a consumer
const sdk = createOntologyClient({
  baseUrl: "...",
  ontologyVersion: "^1.4", // accepts 1.4.x, 1.5.x ... never 2.0
});
```

A `^1.4` range protects against accidental majors but absorbs new features. A few platforms also support multiple live versions at once (v1 and v2 served side by side) so consumers can migrate at their own pace.
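What `^1.4` accepts can be sketched as a tiny matcher. Real resolvers (npm's `semver` package, for instance) also handle pre-releases and build metadata; this illustrative version covers only plain `MAJOR.MINOR.PATCH`:

```typescript
// Sketch: a minimal caret-range matcher for plain MAJOR.MINOR.PATCH versions.
function satisfiesCaret(range: string, version: string): boolean {
  const [minMajor, minMinor] = range.replace(/^\^/, "").split(".").map(Number);
  const [major, minor] = version.split(".").map(Number);
  // Same major, and at least the pinned minor: 1.4.x, 1.5.x... but never 2.0.
  return major === minMajor && minor >= (minMinor ?? 0);
}
```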
Branches
A branch is a copy of the ontology you can edit without affecting main. Use cases:
- Experimentation — try a new model without breaking dashboards.
- Major-version preparation — work on `v2` while `v1` keeps serving.
- Backfill validation — run a migration end-to-end on a branch first.
Workflow:
```shell
# Create a branch
pnpm ontology branch create v2-prep --from main

# Edit on the branch
git checkout -b v2-prep
# ... make changes ...

# Validate against the branch's data
pnpm ontology validate --branch v2-prep
pnpm ontology query --branch v2-prep "..."

# Merge back to main when ready
pnpm ontology branch merge v2-prep --into main
```

Branches inherit datasources but can override them — useful when v2 needs a different upstream table.
Two rules for branches:
- Short-lived. A branch open for six months becomes a parallel ontology nobody understands. Time-box.
- No production traffic on branches. A branch is for development and review, not a permanent “staging.”
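The short-lived rule is easy to enforce mechanically, for example in a CI job. A sketch, assuming branch metadata with a creation date is available (the shape and threshold here are hypothetical):

```typescript
// Sketch: flag ontology branches older than a team-chosen time-box.
const MAX_BRANCH_AGE_DAYS = 30; // pick a time-box that fits your team

interface BranchInfo {
  name: string;
  createdAt: Date;
}

function staleBranches(branches: BranchInfo[], now: Date = new Date()): string[] {
  const msPerDay = 24 * 60 * 60 * 1000;
  return branches
    .filter((b) => (now.getTime() - b.createdAt.getTime()) / msPerDay > MAX_BRANCH_AGE_DAYS)
    .map((b) => b.name);
}
```

A CI job could fail (or just nag) when this list is non-empty, which turns "time-box your branches" from a convention into a check.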
Migrations
When the schema changes in a way that affects stored data, you run a migration. Migrations are versioned files in migrations/:
```text
migrations/
├── 0001-initial.ts
├── 0002-add-shipment-priority.ts
├── 0003-split-customer-name-into-first-last.ts
└── 0004-deprecate-legacy-region-codes.ts
```

A migration spells out three things:
- Schema change. What changes structurally.
- Data change. How existing rows are transformed.
- Rollback plan. What to do if it goes wrong.
Migration shape
```typescript
// migrations/0003-split-customer-name-into-first-last.ts
import { defineMigration } from "ontology-sdk";

export default defineMigration({
  id: "0003-split-customer-name",
  description:
    "Split Customer.companyName into companyName (legal name) and " +
    "displayName (preferred name shown in UI).",
  bumpsVersion: "minor", // adds a new property; old one stays

  schemaChange: ({ ontology }) => {
    ontology.objectType("Customer").addProperty("displayName", {
      type: "string",
      nullable: true,
    });
  },

  dataChange: async ({ ontology, batchedUpdate }) => {
    const customers = await ontology.objectType("Customer").all();
    await batchedUpdate(customers, (c) => ({
      // Initialize displayName from companyName; users can edit later
      displayName: c.companyName,
    }));
  },

  rollback: ({ ontology }) => {
    ontology.objectType("Customer").removeProperty("displayName");
  },
});
```

The migration runs once, is recorded, and never re-runs. On the next deploy, only newer migrations execute.
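The run-once behavior is typically implemented with a ledger of applied migration ids. A minimal sketch of such a runner, with an in-memory `Set` standing in for a persistent store:

```typescript
// Sketch: an idempotent migration runner. Applied ids are recorded in a
// ledger; anything already recorded is skipped on subsequent deploys.
interface Migration {
  id: string;
  run: () => Promise<void>;
}

async function runPending(
  migrations: Migration[],
  appliedLedger: Set<string>, // stands in for a persistent store
): Promise<string[]> {
  const ran: string[] = [];
  // Migration ids start with a zero-padded number, so lexicographic
  // order matches the intended execution order.
  for (const m of [...migrations].sort((a, b) => a.id.localeCompare(b.id))) {
    if (appliedLedger.has(m.id)) continue; // already recorded: skip
    await m.run();
    appliedLedger.add(m.id); // record so the next deploy skips it
    ran.push(m.id);
  }
  return ran;
}
```

The important property is that recording happens immediately after each successful migration, so a deploy interrupted halfway resumes from the first unapplied id.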
Backfills
A migration that adds a non-nullable property to existing data needs a backfill — populating the new column for every existing row before the property’s non-nullability constraint applies.
Pattern:
```typescript
schemaChange: ({ ontology }) => {
  // Step 1: add as nullable
  ontology.objectType("Shipment").addProperty("priority", {
    type: "enum",
    enumRef: "ShipmentPriority",
    nullable: true,
  });
},

dataChange: async ({ ontology, batchedUpdate }) => {
  // Step 2: backfill every row with a sensible default
  const shipments = await ontology.objectType("Shipment").all();
  await batchedUpdate(shipments, () => ({ priority: "standard" }));
},

postSchemaChange: ({ ontology }) => {
  // Step 3: tighten to non-nullable
  ontology.objectType("Shipment").property("priority").setNullable(false);
},
```

Three phases — add nullable, backfill, tighten — let you migrate billions of rows without a long write lock and without rejecting writes during the migration.
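The `batchedUpdate` helper used above is assumed rather than shown; its core idea can be sketched as chunked writes, so no single transaction grows unbounded (names and signature are illustrative, not the SDK's real internals):

```typescript
// Sketch: apply an update function in fixed-size chunks so each write
// transaction stays bounded, no matter how many rows exist.
async function batchedUpdate<T, U>(
  rows: T[],
  update: (row: T) => U,
  writeBatch: (patches: U[]) => Promise<void>, // one bounded write per call
  batchSize = 1000,
): Promise<number> {
  let written = 0;
  for (let i = 0; i < rows.length; i += batchSize) {
    const chunk = rows.slice(i, i + batchSize);
    await writeBatch(chunk.map(update));
    written += chunk.length;
  }
  return written;
}
```

Small batches are what make the backfill "online": other writers interleave between chunks instead of waiting behind one giant transaction.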
Breaking changes — the deprecation dance
Sometimes you really do need to remove a property or rename an enum value. Do it in stages:
Stage 1 — Deprecate (minor version)
Mark the old thing as deprecated. Keep it working.
```typescript
ontology.objectType("Customer").property("regionLegacy").deprecate({
  reason: "Replaced by `region`; same values but new enum reference.",
  removalDate: "2026-12-01",
});
```

Consumers see deprecation warnings in their SDK; tooling flags references in code.
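One way an SDK could surface those read-time warnings is a `Proxy` wrapper that warns once per deprecated property. This is a sketch of the idea, not the real SDK mechanism:

```typescript
// Sketch: warn (once per property) when a deprecated property is read.
function withDeprecationWarnings<T extends object>(
  obj: T,
  deprecated: Record<string, string>, // property name -> deprecation reason
  warn: (msg: string) => void = console.warn,
): T {
  const warned = new Set<string>();
  return new Proxy(obj, {
    get(target, prop, receiver) {
      const key = String(prop);
      if (key in deprecated && !warned.has(key)) {
        warned.add(key); // warn once, not on every read
        warn(`Property "${key}" is deprecated: ${deprecated[key]}`);
      }
      return Reflect.get(target, prop, receiver);
    },
  });
}
```

Warning once per property keeps logs readable while still making every deprecated reference visible somewhere.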
Stage 2 — Dual-write (minor version)
Both regionLegacy and region are maintained. New writes go to both; reads return the canonical value from region.
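The dual-write window can be sketched as a single write path that fans out to both properties while reads stay canonical (property names are from this lesson; the helpers themselves are hypothetical):

```typescript
// Sketch: Stage 2's dual-write for the Customer region properties.
interface CustomerRow {
  region?: string;
  regionLegacy?: string;
}

function writeRegion(row: CustomerRow, value: string): CustomerRow {
  // During the dual-write window, every write maintains both properties...
  return { ...row, region: value, regionLegacy: value };
}

function readRegion(row: CustomerRow): string | undefined {
  // ...but reads always prefer the canonical property; the legacy one is
  // only a fallback for rows not yet touched since the window opened.
  return row.region ?? row.regionLegacy;
}
```

Centralizing this in one write path matters: if some callers still write only `regionLegacy`, the two properties drift and Stage 3 becomes unsafe.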
Stage 3 — Stop dual-writing (minor version)
After consumers have migrated, stop maintaining the deprecated property. It is still present and readable but will not change.
Stage 4 — Remove (major version)
After the removal date, drop the property entirely in a major-version migration. Provide a final migration that helps remaining consumers see the breakage clearly:
```typescript
// migrations/0042-remove-customer-region-legacy.ts
schemaChange: ({ ontology }) => {
  ontology.objectType("Customer").removeProperty("regionLegacy");
},
rollback: ({ ontology }) => {
  ontology.objectType("Customer").addProperty("regionLegacy", { ... });
},
```

This dance feels slow. It is. It is also the only way to evolve schemas without breaking the dozens of consumers downstream.
Versioning and branches together
A typical major-version workflow:
```text
main (v1.7.3)
 │
 ├── branch: v2-prep
 │     │
 │     ├── migration: split companyName
 │     ├── migration: rename "ShipmentPriority" enum values
 │     ├── migration: drop deprecated regionLegacy
 │     │
 │     └── ready for review ─┐
 │                           │
 ├── (consumers migrate to v2-compatible code)
 │                           │
 └── merge v2-prep → main ──┘ (v2.0.0)
```

Two key disciplines:
- The branch is reviewable. Open it as a PR. Other teams should be able to read the migrations and comment.
- Consumers migrate first. When `v2` ships, consumer code already targets it. The merge is the celebration, not the crisis.
What about data changes that are not schema changes?
Sometimes the model is fine but the data is wrong — a botched ingestion, a corruption from a source system, a one-off correction.
These belong in admin actions, not migrations:
```typescript
// ontology/action-types/admin-correct-shipment-weight.ts
export const adminCorrectShipmentWeight = defineActionType({
  apiName: "adminCorrectShipmentWeight",
  parameters: {
    shipmentId: t.string(),
    correctedWeightKg: t.double(),
    reason: t.string({ minLength: 20 }),
  },
  permissions: { invoke: ["admin"] },
  effects: async ({ params, mutate }) => {
    await mutate.update(Shipment, params.shipmentId, {
      weightKg: params.correctedWeightKg,
    });
  },
});
```

Admin actions:
- Are explicit (`admin*` prefix).
- Are restricted to admins.
- Require a reason for audit.
- Are tested.
Far better than running raw SQL. The audit trail will save you when someone asks “why is this row different from the source?”
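The audit entry such an action leaves behind might look like the following sketch, which also mirrors the `minLength: 20` constraint on `reason` (the entry shape is illustrative, not a real SDK type):

```typescript
// Sketch: the audit record an admin action could persist, so "why is this
// row different from the source?" always has an answer.
interface AuditEntry {
  action: string;
  actor: string;
  reason: string;
  at: string; // ISO timestamp
  params: Record<string, unknown>;
}

function buildAuditEntry(
  action: string,
  actor: string,
  reason: string,
  params: Record<string, unknown>,
  now: Date = new Date(),
): AuditEntry {
  if (reason.trim().length < 20) {
    // Mirror the action's minLength constraint: force a real explanation.
    throw new Error("Audit reason must be at least 20 characters.");
  }
  return { action, actor, reason, at: now.toISOString(), params };
}
```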
Communicating changes
Every change — minor or major — needs release notes consumers can read:
```markdown
# v1.5.0 — 2026-05-15

## Added
- `Customer.displayName` (nullable) — preferred name for UI display.
  Defaults to `companyName` when not set.

## Deprecated
- `Customer.regionLegacy` — use `Customer.region`. Removal in v2.0.

## Fixed
- `customerLifetimeValue` now ignores refunded orders.
```

Consumers should be able to read these and know exactly what they need to do — if anything.
Anti-patterns
Anti-pattern 1 — Schema changes without migrations. Editing the object-type file and pushing it without a migration breaks ingestion or causes silent data loss. Every schema change either has a migration or is provably compatible.
Anti-pattern 2 — Big-bang major versions. A v2 that changes 30 things at once is unreviewable and untestable. Break it up.
Anti-pattern 3 — Skipping the deprecation window. “Nobody is using it” — until they are. The deprecation dance exists because you cannot know all consumers.
Anti-pattern 4 — Migrations that take hours of write downtime. Migrations should be online — readable during, writable after the add-nullable stage. If a migration requires downtime, plan it, schedule it, communicate it.
Key takeaways
- Use semantic versioning; minors and patches never break consumers.
- Branches let you experiment and prepare major changes safely.
- Migrations describe schema + data + rollback in one place.
- Deprecate, dual-write, remove — the only safe path through breaking changes.
- Release notes are part of the contract.
What’s next
You can change the ontology safely. The last conceptual lesson — best practices and production patterns — covers the disciplines that keep an ontology healthy years into its life.
Move forward without leaving consumers behind. 🚦