Philosophy
Concentric layers, hardened buffers, roads engineered for control.
The historical jōkamachi
A jōkamachi (城下町, “town below the castle”) is a class of fortified settlement that emerged in Japan during the Sengoku period and was systematically deployed across the country throughout the Edo period. Rather than growing organically, a jōkamachi was a deliberately engineered system: a complete civic, military, and economic environment organized in concentric layers around a central stronghold.
At the core sat the castle itself — the seat of the daimyō and the administrative kernel of the entire settlement. Every other component of the town existed to serve, supply, or defend this centre.
Surrounding the castle was the samurai ring, where retainers were quartered according to rank. The most trusted vassals lived closest to the keep; lower ranks occupied positions further out. This layer functioned as a hardened buffer: any threat reaching the castle first had to pass through the people whose entire purpose was to stop it.

Beyond the samurai districts came the working layers. Artisans and merchants were grouped by trade into dedicated quarters — a swordsmith district, a dyers’ district, a quarter for carpenters, another for rice merchants. Each guild operated within its own bounded zone, with its own internal rules, but contributed to the settlement as a whole. Resources flowed inward toward the castle; goods and services flowed outward to the population.
The outermost ring typically held temples, common dwellings, and the town’s periphery. These were positioned not only for religious and residential purposes but as a final layer of structured resistance: temple grounds were walled, elevated, and easily defensible.
Connecting these rings was a road network engineered for control rather than convenience. Streets were intentionally narrow, frequently angled, and broken up by T-junctions and dead-ends. Sightlines were short. A visitor following the main thoroughfare could not see the castle until permitted to; an invader navigating the same streets would be slowed, exposed, and channeled into predictable kill zones.
The result was a pattern that could be reproduced. Once the jōkamachi template was understood, daimyō across Japan deployed it wherever they consolidated power — adapting the layout to local terrain but preserving the layered logic. Edo, Osaka, Nagoya, Kanazawa, and Himeji all began as jōkamachi, and the bones of that original architecture remain visible in their street plans today.
Why this is a Kubernetes company
A jōkamachi is a layered system, engineered for resilience, where every ring exists to serve the next. A production Kubernetes cluster is the same kind of artifact: a control plane at the centre, hardened buffers around it, working layers further out, and roads — networking, ingress, policy — designed for control, not convenience.
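What "roads designed for control, not convenience" looks like inside a cluster can be sketched with two NetworkPolicy objects: a default-deny policy that closes every route into a namespace, and a second policy that opens exactly one declared road. The namespace, labels, and port here are illustrative, not part of our platform's actual configuration.

```yaml
# Default-deny: no pod in this namespace accepts ingress traffic
# unless a later policy explicitly opens a route.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: workloads            # illustrative namespace
spec:
  podSelector: {}                 # empty selector = every pod in the namespace
  policyTypes:
    - Ingress
---
# One declared road: only the ingress controller may reach the web tier,
# and only on one port. Everything else stays blocked by the policy above.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-web
  namespace: workloads
spec:
  podSelector:
    matchLabels:
      tier: web                   # illustrative label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx   # illustrative
      ports:
        - protocol: TCP
          port: 8080
```

Like the castle town's streets, the default is short sightlines and closed routes; traffic moves only along roads someone deliberately built.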
We didn’t pick the name because Kubernetes is “like a castle.” We picked it because the engineering pattern — concentric, layered, reproducible across sites — is exactly the pattern good clusters share. And because, like the original jōkamachi, the value comes from deploying the same template repeatedly: each customer’s cluster is its own settlement, built from the same plan.
We don’t share clusters across customers. Namespace-based multi-tenancy on a shared control plane is a cost-cutting compromise that trades isolation for hardware utilisation, and the Kubernetes documentation itself flags it as inappropriate when tenants don’t fully trust each other. In a multi-customer setting, they don’t.
What that means in practice
Three principles fall out of this view, and they shape everything from how we price the platform to which datacentres we deploy in.
Affordable by deployment choice
High-availability Kubernetes does not have to cost what hyperscalers charge. The price you pay on EKS, GKE, or AKS is dominated by the managed-service tax and by egress, not by the underlying compute.
We run on Hetzner cloud VMs and bare-metal servers in whichever availability zones our customers need, because that combination — at the time of writing — gives the best price / sovereignty tradeoff. If a better substrate emerges, we’ll deploy there too.
On your terms
Every line of software and configuration we run in production is released as open source, with one exception: KAOS, our agentic operator, is closed source today.
This isn’t lock-in by another name — it’s an honest statement about where the work is. The rest of the platform is the work of the CNCF community over the better part of a decade: settled, well-documented, and not improved by adding a licence on top of it. We didn’t write it, and we don’t add proprietary value to it, so it’s free for the same reason. KAOS is where we’re still tinkering — the operating playbook it reasons over, the maturity model that decides what it’s allowed to do, the decision-log telemetry that makes its actions auditable. That tinkering hasn’t reached a steady state yet, and we charge for the time it takes to get there.
If you’d rather not depend on KAOS, the cluster runs without it. Open-source agentic operators like kagent plug into the same hooks; the difference is how much of the operating playbook they ship with, and how much tinkering they require before they’re production-grade. Try them — that’s how we ended up writing KAOS.
Everything else is open and yours to keep. If you outgrow us, you take the platform with you — and you can swap KAOS for a different agent without re-architecting the cluster underneath. There is no proprietary control plane, no licence server, and no vendor-specific cloud service that pins you to us.
This is not generosity. It’s the only way to make the previous principle credible: “affordable” stops meaning anything if leaving costs you a re-platforming project.
Off the public cloud
The two principles above are only compatible if we stay off hyperscaler-managed services. Affordable + sovereign + open requires control of the substrate. EKS, GKE, and AKS will never be affordable, and will never be on your terms — that is not their goal.
So we don’t use them. We run on the metal (and on cloud VMs that sit next to the metal in the same datacentre), and we operate the cluster ourselves.
Why Hetzner, specifically
- Bare-metal pricing that beats every hyperscaler at every spec.
- Cloud VMs in the same datacentres, for elastic edges of the cluster.
- EU jurisdiction available where you want it: Hetzner operates datacentres in Germany and Finland, which matters for GDPR-bound customers, though the platform itself isn't pinned to a continent.
- A track record long enough that “Hetzner reliability” is a known quantity, not a bet.
This is a pragmatic choice, not an ideological one. If a comparable provider emerges in another jurisdiction our customers care about, we will deploy there too — the platform is designed to be substrate-agnostic, and the architecture booklet covers the seams along which a different provider would slot in.