Why Kubernetes?
Why apply-once infrastructure tools fray on dynamic systems, and why reconciliation loops don’t.
Jōkamachi Systems has roots in functional programming and classic Linux systems administration. The natural gravitational pull of that combination is towards prescriptive tools — Terraform, NixOS, terranix. You declare how the world ought to be, run the apply, and the tool drives reality towards your specification. Apply once. Done.
This works beautifully when reality cooperates by holding its shape.
The polite fiction of a snapshot
In What functional programmers get wrong about systems (2026), Ian Duncan observes that
The unit of correctness in production is not the program. It is the set of deployments.
The model we hold of the running system, and the running system itself, drift from each other constantly. Production is never a single coherent snapshot of the world; it is always an ensemble of versions that happen to be running simultaneously, and that drift has to be accounted for.
Terraform: declare, apply, drift
Terraform’s mental model is “capture the desired shape of the world in code, run terraform apply, and the world becomes that shape.” This is a wonderful abstraction when the world is a small set of slow-moving resources — a VPC, a few S3 buckets, an IAM role.
It starts to fray when the world is fast-moving, especially when something outside Terraform changes the world without telling Terraform. The official term for this is drift. The practical experience of drift is that you terraform plan six months after the last apply and Terraform proposes to delete things you forgot existed, recreate things that have moved, and reset configuration an operator manually changed mid-incident.
Terraform’s response to drift is to re-apply, reasserting the declared shape. That works exactly once before the next drift event. The fundamental assumption — the world retains the shape we left it in — has to be re-asserted on every cycle, by hand.
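The apply-once semantics can be sketched in a few lines of Python. This is a toy model, not Terraform itself; the resource key and instance sizes are made up for illustration:

```python
def apply_once(desired: dict, world: dict) -> None:
    """Terraform-style: drive the world to the desired shape, once."""
    world.update(desired)

world: dict = {}
apply_once({"instance_type": "m5.large"}, world)

# Months later, an operator resizes the instance mid-incident.
world["instance_type"] = "m5.xlarge"

# Nothing reasserts the declared shape; the drift persists until a
# human runs apply again by hand.
assert world["instance_type"] == "m5.xlarge"

apply_once({"instance_type": "m5.large"}, world)  # the manual re-apply
assert world["instance_type"] == "m5.large"
```

The point of the sketch is the gap between the two asserts: between the drift event and the manual re-apply, the declared shape and the real world simply disagree, and no machinery notices.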
Kubernetes: declare, reconcile, never stop
Kubernetes makes a different assumption: the world will not retain its shape, and that’s fine. Every controller in the system runs the same loop, forever:
1. Read the desired state from the API server.
2. Read the actual state of the world.
3. Compare them.
4. Take the smallest action that closes the gap.
5. Sleep briefly. Go to 1.
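The loop above can be sketched in Python. This is a toy model of the controller pattern, not the real Kubernetes machinery; every name here is hypothetical, and the “cluster” is just a dict:

```python
import time

def reconcile(desired: dict, read_actual, act) -> None:
    """One pass of the controller loop: observe, diff, correct."""
    actual = read_actual()
    if actual != desired:
        act(desired, actual)  # smallest action that closes the gap

# Toy "cluster": the actual state is a mutable dict.
world = {"replicas": 0}

def read_actual() -> dict:
    return dict(world)

def start_replicas(desired: dict, actual: dict) -> None:
    world["replicas"] = desired["replicas"]

desired = {"replicas": 1}

# A real controller never terminates; here we run three passes, with a
# "pod crash" (drift) injected between each one.
for _ in range(3):
    reconcile(desired, read_actual, start_replicas)
    world["replicas"] = 0   # drift: the pod dies again
    time.sleep(0.01)        # controllers sleep briefly between passes
```

The drift line is the point: the loop never assumes the correction sticks. Every pass re-reads the world and closes whatever gap exists right now.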
A pod crashes — the ReplicaSet controller starts another. The API server holds the spec; the cluster’s actual state is whatever it happens to be right now; the controllers are the loop that continually closes the gap, forever.
Kubernetes is like Terraform if terraform apply ran every few seconds, in parallel, scoped to thousands of small resources, with retries and exponential backoff baked in.
A worked contrast
Imagine running one small web service.
- The Terraform way. You write an aws_instance resource, terraform apply, and the instance comes up. It runs for a year. Someone resizes it to add memory; Terraform doesn’t know. The instance dies; Terraform doesn’t notice until the next plan, at which point it offers to delete and recreate it. The intent (one running instance) is captured; the execution of that intent (continuously re-creating the instance when it dies) is your responsibility.
- The Kubernetes way. You write a Deployment with one replica, kubectl apply -f, and the pod comes up. The pod dies thirty times a day for unrelated reasons; Kubernetes notices each time within a few seconds and starts another. You never know it happened until you read the event log, because the intent and the execution are both Kubernetes’ job.
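For concreteness, the “Deployment with one replica” could look like this. A minimal sketch; the name, labels, and image are placeholders, not anything from a real cluster:

```yaml
# Minimal single-replica Deployment. Kubernetes will keep one pod
# matching this template running, restarting it whenever it dies.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27
```

Note that the file says nothing about restarts, retries, or crash handling: replicas: 1 states the intent, and the controllers own the execution.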
Terraform’s model is “describe the world now; reconciliation is not in scope”. Kubernetes’ model is “describe the world always; reconciliation is the whole point”.
Organisations are dynamic systems
The reason this matters goes beyond taste: organisations are dynamic systems. A node’s disk fills up; a region briefly disconnects; a third-party API returns 503s for an hour. The world your infrastructure lives in does not stop changing.
That’s why Kubernetes. The next chapter is about why we don’t write the spec for Kubernetes by hand-stitching Helm charts — Why Kubenix? picks up there.