The jail backend is the default sandbox substrate; bhyve is the second substrate, for the hardware-isolated path. Session A built the pool control plane and the guest image; this appendix is the substrate-verification receipt. Pool warm → checkout → SSH-resume → return → drain all pass their gates on honor.
Pieces
- /vms/templates/<tpl>.img: raw UFS disk image. Built by benchmarks/rigs/bhyve-sandbox-image-build.sh: mdconfig+mount the FreeBSD-15.0-RELEASE base, grow p4 to 10 GiB, drop in /etc/rc.d/coppice-net to pick up the per-entry /etc/coppice-instance (IP, hostname), install python311 and py311-ipykernel via pkg-in-chroot, bake the pool SSH pubkey into /root/.ssh/authorized_keys, set PerSourcePenalties no in sshd_config. Wall time on honor: ~7 min (dominated by UFS-in-vnode write throughput during python311 extraction).
- /usr/local/sbin/coppice-bhyve-pool-ctl: shell lifecycle controller. Commands: init, warm <tpl> --count N, checkout <tpl>, return <id>, drain <tpl>, list [--json]. State at /var/db/coppice/bhyve-pool/<tpl>.json.
- coppbhyve0: dedicated bridge at 10.77.0.1/24 (disjoint from the jail bridge coppicenet0, 10.78.0.0/24). Pool entries get .50-.249.
- /root/coppice-signing/pool-key{,.pub}: ed25519 keypair. The image-build script generates it on first run; the public half is baked into every pool image.
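The per-entry /etc/coppice-instance file is what lets one template image serve every pool slot. A minimal sketch of how a coppice-net-style rc script could consume it, assuming a simple key=value format (the exact file format and the vtnet0 interface name are assumptions, not confirmed by the source):

```shell
#!/bin/sh
# Sketch: consume a per-entry instance file the way /etc/rc.d/coppice-net
# might at guest boot. key=value format and interface name are assumptions.
set -eu

# Stand-in for /etc/coppice-instance, as the warm path might write it:
instance_file=$(mktemp)
printf 'ip=10.77.0.50\nhostname=pool-0\n' > "$instance_file"

# shellcheck disable=SC1090
. "$instance_file"                      # defines $ip and $hostname

# In the guest the rc script would execute these; the sketch only prints
# the commands so it can run outside a bhyve guest.
cfg=$(printf 'hostname %s\nifconfig vtnet0 inet %s/24\nroute add default 10.77.0.1\n' \
      "$hostname" "$ip")
echo "$cfg"
rm -f "$instance_file"
```

Sourcing the file keeps the guest-side consumer trivial; the warm path only has to rewrite two values per entry.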
Lifecycle (hot path)
- warm copies the template image per entry, patches /etc/coppice-instance via mdconfig+mount inside a lockf-guarded critical section (parallel warm would race devfs otherwise), starts bhyve (-H -A -P, needed for SIGSTOP/SIGCONT semantics), polls SSH, then kill -STOPs the bhyve process.
- checkout picks the first available entry, flips it to in-use, and kill -CONTs. SSH on the guest answers within ~150 ms on honor, dominated by TCP-probe latency plus sshd wake-up.
- return destroys the in-use VM and respawns a fresh one from the template (v1 policy; per-entry ZFS snapshots are a future optimization).
- drain tears every entry down, removes /vms/bhyve-pool/<id>.img, and deletes the state file. list afterwards is empty.
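The checkout state flip above can be sketched as follows. The real controller keeps JSON state at /var/db/coppice/bhyve-pool/<tpl>.json and runs under lockf(1); the flat "id state ip pid" stand-in file here is an assumption so the sketch runs anywhere:

```shell
#!/bin/sh
# Sketch of checkout's first-available selection and state flip.
# Flat-file state is a stand-in for the controller's JSON; the real path
# would also kill -CONT the chosen entry's bhyve pid here.
set -eu

state=$(mktemp)
cat > "$state" <<'EOF'
pool-0 available 10.77.0.50 22149
pool-1 available 10.77.0.51 22153
EOF

# First-available ordering, same spirit as the controller's grep -m1.
entry=$(grep -m1 ' available ' "$state")
id=${entry%% *}

# Flip the chosen entry to in-use so a concurrent checkout skips it.
sed -i.bak "s/^$id available/$id in-use/" "$state" && rm -f "$state.bak"

echo "checked out: $id"
grep "^$id" "$state"
```

Note the first-available property this implies: a sequential return+checkout hands the same id back, which is exactly the gotcha the smoke rig had to work around.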
Smoke-rig receipt
benchmarks/rigs/bhyve-sandbox-pool-smoke.sh drives the full
lifecycle. Sample transcript on honor 2026-04-23:
smoke: warm wall time: 15s
smoke: checkout: id=pool-0 ip=10.77.0.50 pid=22149
smoke: checkout → ssh-ready: 147 ms
smoke: python3 1+1 = 2
smoke: checkout #2 (while #1 still in-use): id=pool-1 (!= pool-0, concurrency-safe)
smoke: return: pool-0 back to available
smoke: drain python-bhyve
template pool_size warm_s checkout_ms py_result
python-bhyve 2 15 147 2
smoke: ALL GATES PASSED
Full sample at benchmarks/rigs/bhyve-sandbox-pool-smoke.sample.txt.
Numbers, honestly
- Image build: 430 s wall on honor. Most of that is pkg install python311 extracting onto the mdconfig-backed UFS slice; dirty buffers flush at ~3 MiB/s sustained on a 5900HX laptop. Not fast, but it runs once.
- Warm 2 entries: 15 s wall. Two parallel bhyveload runs plus two FreeBSD boots to sshd. Dominated by guest boot, not bhyve.
- Checkout → SSH-ready: 147 ms wall. This is the user-visible "create latency": what an SDK client would observe from checkout to being able to ssh root@<ip>.
- Kernel-level resume (bhyvectl --resume → vCPU runtime advances): 17 ms, measured separately in snapshot-cloning. The two numbers measure different things: 17 ms is the VMM latency; 147 ms is the full REST-ish path including the SSH handshake and a few poll iterations.
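The "few poll iterations" in the 147 ms figure come from a readiness loop after kill -CONT. A minimal sketch of that shape, with the probe command parameterized (the attempt cap and 50 ms interval are illustrative assumptions; the pool's actual probe is an SSH/TCP connect):

```shell
#!/bin/sh
# Sketch: bounded readiness poll of the kind that sits between checkout's
# kill -CONT and "SSH-ready". Probe command and interval are placeholders.
set -eu

# poll_ready <max_attempts> <cmd...> -> prints attempts used; rc 1 on timeout
poll_ready() {
    max=$1; shift
    i=1
    while [ "$i" -le "$max" ]; do
        if "$@" 2>/dev/null; then
            echo "$i"
            return 0
        fi
        i=$((i + 1))
        sleep 0.05          # short probe interval, per-iteration cost ~50 ms
    done
    return 1
}

attempts=$(poll_ready 5 true)     # stand-in probe that succeeds immediately
echo "ready after $attempts attempt(s)"
```

With a ~50 ms interval, two or three iterations plus the sshd wake-up lands in the observed ~150 ms band.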
Gotchas we hit
- PerSourcePenalties in OpenSSH 9.8+ bans the source IP after a few failed auth attempts and returns "Not allowed at this time". The warm path SIGSTOPs mid-SSH and the probe loop hammers connects, enough to trip the penalty counter. Disabled in the image's sshd_config. Without this fix, the second smoke-rig run fails with a spurious "connect: Permission denied" at the TCP layer.
- First-available ordering: checkout uses grep -m1 available, so a sequential return+checkout gets the same entry back. The smoke rig's concurrency check needs to run while the first entry is still in-use, not after return. The rig was reshaped accordingly; the pool-ctl behavior is correct.
- No startup reconciliation: if honor reboots with live pool entries, the state file and the IpAllocator forget the bhyve processes. Handled the same way the jail path does it: rare in practice, drain-then-warm recovers. TODO shared with the jail backend.
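For reference, the sshd_config line the image build bakes in is just the one directive (PerSourcePenalties is a real OpenSSH 9.8+ option; the comment wording here is ours):

```
# Pool probe loop SIGSTOPs guests mid-handshake; without this, OpenSSH 9.8+
# penalizes the pool host's source IP after a few aborted connects.
PerSourcePenalties no
```

The alternative of widening PerSourcePenaltyExemptList to the pool subnet would also work, but disabling penalties outright is simpler for a single-tenant guest image.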
What this does not cover
- The Rust BhyveBackend in e2b-compat: Session B's job, separate. Once wired, the gateway's POST /sandboxes can target this substrate via a template flag.
- Per-entry ZFS snapshot rollback (instead of destroy+respawn on return): Session C.
- A bhyve-pool mise task and a CI gate for the image-build step. Left for a follow-up.