📖 Lesson ⏱️ 60 minutes

Setting Up Your Environment

Install the toolchain, configure your workspace, and create your first empty ontology

Pick your stack

Most ontology platforms support multiple authoring languages. For this course we use TypeScript as the reference, because:

  • The type system is expressive enough to mirror ontology types.
  • Object sets, links, functions, and actions feel natural as fluent APIs.
  • Tooling (LSP, formatter, test runner) is mature and lightweight.

If you prefer Java (Spring-style, strongly typed, JVM tooling) or Python (dataclass-based, friendly to data scientists), the patterns transfer one-to-one. Pick what your team already runs in production.

For local development you will need:

  • Node.js 20+ (or your platform’s runtime)
  • pnpm or npm
  • Docker for running the local validation server
  • git for version control

This course uses platform-agnostic syntax. When we show “command X”, substitute the equivalent in your platform.

Project layout

A workspace that scales — the layout we will use throughout the hands-on lessons:

my-ontology/
├── ontology/
│   ├── object-types/
│   │   ├── customer.ts
│   │   ├── order.ts
│   │   ├── shipment.ts
│   │   └── ...
│   ├── link-types/
│   │   ├── customer-placed-order.ts
│   │   └── ...
│   ├── action-types/
│   │   ├── mark-shipment-delivered.ts
│   │   └── ...
│   ├── functions/
│   │   ├── customer-lifetime-value.ts
│   │   └── ...
│   ├── interfaces/
│   │   └── locatable.ts
│   └── enums/
│       └── shipment-status.ts
├── datasources/
│   ├── customers.dataset.yaml
│   ├── shipments.stream.yaml
│   └── ...
├── policies/
│   ├── markings.yaml
│   └── access-policies.yaml
├── migrations/
│   └── 0001-initial.ts
├── tests/
│   ├── fixtures/
│   ├── object-types.test.ts
│   ├── actions.test.ts
│   └── functions.test.ts
├── ontology.config.ts
├── package.json
└── README.md

One file per definition. A 12-line file per object type is much better than a 1,200-line everything.ts. Diff readability and code review depend on it.

Definitions grouped by primitive. Object types in one folder, actions in another, functions in a third. When you grep for “where is assignDriver defined?” the answer is unambiguous.

Policies and migrations separate. Security and schema evolution are first-class concerns; they deserve their own top-level folders.

Initialize the project

Bootstrap a new ontology project:

mkdir my-ontology && cd my-ontology
pnpm init
pnpm add -D typescript @types/node tsx vitest
pnpm add ontology-sdk        # placeholder — your platform's SDK
pnpm exec tsc --init --strict --target es2022 --module nodenext

Create the config file:

// ontology.config.ts
import { defineOntology } from "ontology-sdk";

export default defineOntology({
  name: "logistics-ontology",
  apiName: "logistics",
  version: "0.1.0",
  rootDir: "./ontology",
  datasourcesDir: "./datasources",
  policiesDir: "./policies",
});

The exact API differs per platform; the idea is the same: declare what your project contains and where each piece lives.

Your first object type

Create a minimal Customer object type as a smoke test:

// ontology/object-types/customer.ts
import { defineObjectType, t } from "ontology-sdk";

export const Customer = defineObjectType({
  apiName: "customer",
  displayName: "Customer",
  description: "A business or individual under a signed agreement.",
  primaryKey: "customerId",
  titleKey: "companyName",
  properties: {
    customerId:  t.string({ pattern: /^cust_[a-zA-Z0-9]{8,16}$/ }),
    companyName: t.string(),
    region:      t.enum("Region", ["NA", "EU", "APAC", "LATAM"]),
    signedAt:    t.timestamp(),
  },
});

Validate locally:

pnpm ontology validate

Expected: ✓ Customer — schema valid.

If you hit errors, the most common are:

  • Primary key not in properties — the primaryKey value must match a key in properties.
  • Enum referenced but not defined — define enums in ontology/enums/ if reused across types.
  • Display name missing — required for UI rendering.

A first datasource binding

Create a fixture dataset and bind the object type to it:

# datasources/customers.dataset.yaml
apiName: customers_v0
displayName: "Customers v0 (fixture)"
kind: dataset
location: ./tests/fixtures/customers.csv
schema:
  - { name: customer_id,  type: string }
  - { name: company_name, type: string }
  - { name: region,       type: string }
  - { name: signed_dt,    type: string }   # ISO timestamp string

Bind it:

// ontology/object-types/customer.ts
export const Customer = defineObjectType({
  // ... as before
  source: {
    datasource: "customers_v0",
    primaryKey: "customer_id",
    mapping: {
      customerId:   "customer_id",
      companyName:  "company_name",
      region:       (row) => row.region.toUpperCase(),
      signedAt:     (row) => new Date(row.signed_dt),
    },
  },
});

Pop in a tiny fixture CSV:

# tests/fixtures/customers.csv
customer_id,company_name,region,signed_dt
cust_alpha001,Acme Logistics,EU,2024-03-12T09:00:00Z
cust_alpha002,Globex Transport,NA,2025-01-22T15:30:00Z
cust_alpha003,Initech Freight,APAC,2025-09-08T11:15:00Z

Smoke test

Spin up the local validation server and query:

pnpm ontology dev

In another terminal:

pnpm ontology query "Customer.all().limit(10)"

Expected output:

[
  { customerId: "cust_alpha001", companyName: "Acme Logistics",  region: "EU",   signedAt: ... },
  { customerId: "cust_alpha002", companyName: "Globex Transport", region: "NA",   signedAt: ... },
  { customerId: "cust_alpha003", companyName: "Initech Freight",  region: "APAC", signedAt: ... },
]

You have a working ontology — three customer objects, one type, one datasource — in under fifty lines of code.

Version control hygiene

A few git habits that pay off:

Branch by change, not by feature. A branch that adds the Customer object type. Another that wires the datasource. Another that adds the link type. Small, reviewable diffs.

Commit the generated artifacts only if your platform requires it. Most platforms regenerate types on every install — commit only sources.

Tag releases. Your ontology will be versioned, and consumers will pin to versions. git tag v0.1.0 makes the surface visible.

Keep tests/fixtures/ small and committed. Reproducible smoke tests need fixed data.

CI pipeline — the minimum

Even on day one, set up CI:

# .github/workflows/validate.yml
name: validate
on: [push, pull_request]
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20 }
      - run: pnpm install
      - run: pnpm ontology validate
      - run: pnpm test

Three checks: install, validate the ontology, run tests. As you add object types and actions, expand the suite — but get the loop working immediately.

Editor setup

For TypeScript users:

  • VS Code with the platform’s extension, if available — typically gives you hover docs on ontology types and “go to definition” across the codebase.
  • ESLint with the ontology-sdk preset — catches dead links and undefined enums before validate.
  • Prettier with printWidth: 100 — every team has a style; pick one and commit it.

Common pitfalls at setup

1. Local datasource not found. Make sure the path in datasources/*.yaml is relative to the project root, not the YAML file.

2. Schema drift on first ingest. If your CSV has trailing whitespace or BOM bytes, mappings break in confusing ways. Normalize fixtures once.

3. Validate passes but query returns nothing. Almost always a mapping bug — the primary key in the dataset is empty or unmatched. Check pnpm ontology debug:source customers_v0.

4. SDK version mismatch. Pin the SDK in package.json; do not float to latest. Ontology APIs evolve.

What you have now

  • A project layout that will hold dozens of object types without becoming chaos.
  • One end-to-end smoke test from datasource → ontology → query.
  • A CI pipeline that catches schema errors on every PR.
  • A development loop: edit → validate → query → commit.

This is the foundation we will build on for the rest of the course.

What’s next

We have an environment. Next we think before we build: in the next lesson — Designing Your Object Model — we go from a business domain to a complete schema, on a whiteboard, before writing a single line of code.


The tools are in your hands. 🧰