The Jails Answer

Idiomatic, cheap, and a different security model.

Before talking numbers, a concession and a framing. Jails are not a FreeBSD equivalent of a microVM. They are a FreeBSD equivalent of namespaces. They share the host kernel. If that’s disqualifying for your threat model, go to bhyve. If you’re comparing against Docker for workloads you’d otherwise run on a shared-kernel system anyway, jails are the fair benchmark — and they are remarkably, almost embarrassingly fast.

What jails give you

For a whole class of agent workloads — running LLM-generated Python in a captured-syscalls sandbox that can’t reach the internet except through a pf-policed egress allow-list, with a ZFS-cloned rootfs so each agent is isolated at the filesystem level — jails are the right tool, not a compromise. This is exactly what FreeBSD has spent decades building.
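A minimal sketch of that recipe, assuming illustrative names throughout (the agent42 jail, the epair42 interface pair, the /jails paths, and the allow-list address are all hypothetical, not from this post):

```
# /etc/jail.conf -- sketch; jail name, interface, and paths are illustrative
agent42 {
    path = "/jails/agent42";          # mountpoint of a ZFS clone of the template
    host.hostname = "agent42";
    vnet;                             # jail gets its own network stack
    vnet.interface = "epair42b";      # jail side of the epair moves into the jail
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}

# pf.conf fragment -- default-deny egress, allow-list only.
# Jail-originated packets arrive "in" on the host side (epair42a),
# so the policy is enforced outside the jail's reach.
table <agent_egress> { 203.0.113.10 }   # the only destination the agent may reach
pass in quick on epair42a proto tcp to <agent_egress> port 443
block in quick on epair42a all
```

With quick rules pf takes the first match, so the pass line must precede the block line; the stateful pass (keep state is pf's default) lets replies flow back without extra rules.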

What jails don’t give you

The shared-kernel framing is what’s load-bearing. For LLM-generated code under an adversarial threat model, the question is whether a Python runtime can exercise enough kernel surface to find an escape. The pragmatic answer in 2026 is usually “in theory yes, in practice no, but we don’t know what we don’t know.” For every threat model that demands better than that, microVMs are the answer.

The numbers

All measurements were taken on honor (FreeBSD 15.0-RELEASE-p4, amd64). Methodology at /appendix/bench-rig. “Cold start” = wall-clock time from the jail -c invocation to a canned echo ready completing inside the jail.
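That definition can be sketched as a small timing harness. This is a hedged sketch, not the bench rig’s actual script: CMD defaults to a no-op stand-in for the real jail -c invocation (the exact flags live in /appendix/bench-rig), and the %N nanosecond format requires GNU date (gdate from sysutils/coreutils on FreeBSD).

```shell
#!/bin/sh
# Hedged cold-start harness sketch. CMD is a placeholder: on the rig it is
# the jail -c invocation that waits for `echo ready` inside the jail.
CMD=${CMD:-true}
N=${N:-5}
total=0
i=0
while [ "$i" -lt "$N" ]; do
    start=$(date +%s%N)              # ns since epoch (GNU date; gdate on FreeBSD)
    sh -c "$CMD"                     # e.g. jail -c ... command='echo ready'
    end=$(date +%s%N)
    ms=$(( (end - start) / 1000000 ))
    printf 'run %d: %d ms\n' "$i" "$ms"
    total=$((total + ms))
    i=$((i + 1))
done
echo "mean: $((total / N)) ms"
```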

Single-sandbox cold start

Chart · Cold start (mean, concurrency=1)

Mean cold-start (ms): Jail — raw 1229.5 · Jail — VNET + pf 2008.8 · Jail — ZFS clone 121.5 · Jail — VNET + pf + ZFS clone 345.2

▸ reproduce · mise run bench:jail-raw · mise run bench:jail-vnet-pf · mise run bench:jail-zfs-clone · mise run bench:jail-vnet-zfs-clone · methodology

The three jail configurations differ in where they spend time: jail-raw pays for a full cp -R of the 374 MB template rootfs (~1.1 s of its ~1.2 s total); VNET + pf adds roughly 800 ms of epair creation and pf attachment on top of that same copy; the ZFS clone swaps the copy for a near-instant copy-on-write clone, which is why it lands around 120 ms.

Tail latency under concurrency

Chart · Cold start percentiles (concurrency=50)

[Chart: p50 / p95 / p99 cold-start latency per configuration (Jail — raw, VNET + pf, ZFS clone, VNET + pf + ZFS clone), 0–20,000 ms axis]

▸ reproduce · mise run bench:jail-raw · mise run bench:jail-vnet-pf · mise run bench:jail-zfs-clone · mise run bench:jail-vnet-zfs-clone · methodology

At concurrency 50, serialization sources become visible — the kernel’s per-jail setup locks, ZFS transaction group commits under concurrent clones, pf ruleset reloads if policy changes during the burst. The percentiles tell the story of where the tail lives.
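The percentile reduction the charts apply is itself simple. A sketch of the nearest-rank method (an assumption; the rig may interpolate instead), run here against a synthetic 1–100 ms sample set in place of real per-jail latencies:

```shell
#!/bin/sh
# Nearest-rank percentiles over a file of per-run latencies, one per line.
percentile() {
    # $1 = percentile (1-100), $2 = file of numerically sorted samples
    n=$(wc -l < "$2")
    idx=$(( ($1 * n + 99) / 100 ))   # nearest-rank: ceil(p/100 * n)
    [ "$idx" -lt 1 ] && idx=1
    sed -n "${idx}p" "$2"
}

seq 1 100 | sort -n > sorted.txt     # synthetic stand-in for real samples
echo "p50: $(percentile 50 sorted.txt) ms"
echo "p95: $(percentile 95 sorted.txt) ms"
echo "p99: $(percentile 99 sorted.txt) ms"
```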

Idle memory overhead

Chart · Idle RSS

[Chart: mean idle RSS (KB) per configuration (Jail — raw, VNET + pf, ZFS clone, VNET + pf + ZFS clone), 0–2,000 KB axis]

▸ reproduce · mise run bench:jail-raw · mise run bench:jail-vnet-pf · mise run bench:jail-zfs-clone · mise run bench:jail-vnet-zfs-clone · methodology

Per-jail RSS at idle, measured across 32 concurrent jails, each running a simple sleep 60. This is the closest apples-to-apples comparison to Tencent’s “<5MB overhead per instance” for microVMs — except jails share the host kernel, so the base cost is lower still. There is no guest kernel overhead to amortize.
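A portable approximation of that measurement (hedged: on the rig each sleep runs inside its own jail and RSS is read per jail, e.g. via ps with the jail id; here plain host processes stand in, and N is reduced from 32 for brevity):

```shell
#!/bin/sh
# Spawn N idle processes and average their resident set size via ps.
N=8
pids=""
i=0
while [ "$i" -lt "$N" ]; do
    sleep 60 &                        # stand-in for `sleep 60` inside a jail
    pids="$pids $!"
    i=$((i + 1))
done
total=0
for pid in $pids; do
    rss=$(ps -o rss= -p "$pid")       # resident set size in KB
    total=$((total + rss))
done
echo "mean idle RSS: $((total / N)) KB"
kill $pids 2>/dev/null
```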


Reading the results

honor: FreeBSD 15.0-RELEASE-p4; template size 374 MB; 30 samples at cc=1 and cc=10, 50 samples at cc=50; fresh bench-agent state each run. Numbers rounded:

| config | cc=1 mean | cc=10 mean | cc=50 mean | cc=50 p95 | idle RSS |
|---|---|---|---|---|---|
| jail-raw | 1230 ms | 3790 ms | | | 2.0 MB |
| jail-vnet-pf (cp -R) | 2010 ms | 5520 ms | 24,750 ms | 30,435 ms | 2.0 MB |
| jail-zfs-clone | 122 ms | 216 ms | 302 ms | 361 ms | 2.0 MB |
| jail-vnet-zfs-clone | 345 ms | 3490 ms | 3460 ms | 4730 ms | 2.1 MB |

The spread is an order of magnitude between jail-raw and jail-zfs-clone: the rootfs strategy dominates everything else. A 374 MB cp -R burns ~1.1 seconds by itself; everything else on the jail-raw critical path is in the noise.
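The difference in mechanism, sketched with illustrative dataset names (zroot/jails/template and the @golden snapshot are assumptions, not from this post):

```
# Strategy A: full copy of the 374 MB template -- O(template size), ~1.1 s
cp -R /jails/template /jails/agent42

# Strategy B: snapshot once, clone per jail -- O(1) metadata, copy-on-write
zfs snapshot zroot/jails/template@golden                   # one-time
zfs clone zroot/jails/template@golden zroot/jails/agent42  # per jail
```

The clone shares all blocks with the snapshot until the jail writes to them, so its cost is independent of template size; teardown is a single zfs destroy.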

When jails are the right answer

High-density agent sandboxes where a shared host kernel fits the threat model: with ZFS clones a jail cold-starts in ~120 ms and idles at ~2 MB, so the per-sandbox cost is effectively noise.

When they aren’t

Threat models where “in theory yes, in practice no” is not good enough: workloads where LLM-generated code must be assumed to probe the shared kernel for an escape.

For those, the answer is /essays/freebsd-bhyve.