All numbers, one page

Every Coppice claim on the rest of the site traces back to one row on this page. Every row traces back to a script under benchmarks/rigs/. Every number traces back to a file under benchmarks/results/. If you don’t see the number here, we haven’t run it. If you see it here with a dash, the rig exists but hasn’t captured a number yet. Updated as benchmarks land.

Host

All FreeBSD numbers on this page measured on honor:

| | |
| --- | --- |
| CPU | AMD Ryzen 9 5900HX (8c/16t Zen 3, laptop APU) |
| RAM | 32 GB DDR4-3200 non-ECC |
| Storage | single NVMe, GELI-encrypted, ZFS |
| OS | FreeBSD 15.0-RELEASE-p4 |
| Kernel | custom SNAPSHOT: GENERIC + options BHYVE_SNAPSHOT + patches/vmm-memseg-vnode.diff + patches/bhyve-vnode-restore.diff |
| pf | enabled, net.link.bridge.pfil_member=1, set limit anchors 4096 |

This is not bare-metal parity with Cube’s undisclosed benchmark box — that gap is the point of the homepage’s “stricter clock, smaller box” framing.

Cold-start latency — bhyve configurations

Chart · cc=1 resume / boot latency — log scale

[Bar chart, log y-axis, mean ms: full guest 3.9 s · durable pool 271 ms · durable + prewarm 17 ms · pre-warm pool 10 ms]
Four production-candidate configurations, log y. Tencent's advertised 60 ms is the dashed reference.

▸ reproduce · mise run bench:bhyve-full · mise run bench:bhyve-durable-pool · mise run bench:bhyve-durable-prewarm-pool · mise run bench:bhyve-prewarm-pool · methodology

| config | cc=1 mean | cc=10 mean | cc=50 p95 | rig |
| --- | --- | --- | --- | --- |
| bhyve-full (cold boot) | 3 906 ms | 5 846 ms | 100 194 ms | Full GENERIC guest. Upper bound; no one ships this. |
| bhyve-prewarm-pool | 10 ms | 10 ms | 41 ms | SIGSTOP’d live VMs. Process-level suspend/resume; not durable across host reboot. |
| bhyve-durable-pool | 271 ms | 1 565 ms | 3 103 ms | bhyvectl --suspend → disk → bhyve -r. Durable across reboot. |
| bhyve-durable-prewarm-pool | 17 ms | 39 ms | 1 290 ms ‡ | Two-tier: durable on-disk checkpoints as the cold tier, a SIGSTOP’d hot tier in front. Production shape. |

‡ cc=50 is dominated by 50 guests sharing 16 physical threads — SIGCONT delivers immediately, but each vCPU waits for a time-slice. Latency degrades smoothly from cc=10 (105 ms mean) → cc=20 (333 ms) → cc=50 (995 ms); on a 32-thread host this falls back into the cc=10 band. An early measurement reported 2 903 ms p95; that was the poll loop competing with itself, fixed by adding a 1 ms delay between bhyvectl probes. See parity-gaps.

All four are resume-from-ready; cc is concurrent creates. Full rig recipe + SNAPSHOT kernel build at /appendix/bench-rig.
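The bhyve-durable-pool row is a two-step loop: checkpoint a warm template guest to disk once, then restore clones from that file on demand. A minimal dry-run sketch of that shape — the VM name and checkpoint path are made up for illustration, and it assumes the SNAPSHOT kernel from the Host table:

```shell
#!/bin/sh
# Dry-run sketch of the bhyve-durable-pool cycle. With DRYRUN=1 (default)
# the commands are only echoed; set DRYRUN=0 on a patched host to run them.
set -eu
: "${DRYRUN:=1}"
run() { if [ "$DRYRUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

CKPT=/pool/ckpt/template.ckp   # hypothetical checkpoint path

# 1. One-time: suspend the warm template guest to a durable on-disk file.
run bhyvectl --vm=template --suspend="$CKPT"

# 2. Per-create: resume a sandbox from the shared checkpoint.
run bhyve -r "$CKPT" sbx-001
```

The same checkpoint file backs every clone, which is what the density section below measures.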

Density — N microVMs from one checkpoint, with vmm-vnode patch

| concurrent VMs | host Δ | per-VM effective | notes |
| --- | --- | --- | --- |
| 8 × 256 MiB | 103 MiB | 12.9 MiB | Template in page cache once; per-VM is bhyve state. |
| 50 × 256 MiB | 939 MiB | 18 MiB | Original KSM-gap sample; now ensemble-matched. |
| 200 × 256 MiB | 5 366 MiB | 26 MiB | 16 threads saturating; cp cost visible. |
| 400 × 256 MiB | 6 492 MiB | 16 MiB | Fixed cost amortizing. |
| 1 000 × 256 MiB | 9 117 MiB | 9 MiB | Naive: ~250 GiB. Laptop fit. |

Rig: bhyve-fanout-rss.sh. Mechanism: /appendix/vmm-vnode-patch.
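The per-VM effective column is just host Δ divided by VM count. Re-deriving the 1 000-VM row from the table's own numbers:

```shell
# Recompute the last density row: 9 117 MiB host delta across 1 000 guests,
# versus the naive sum of 1 000 fully-backed 256 MiB guests.
awk 'BEGIN {
  delta_mib = 9117; n = 1000; guest_mib = 256
  printf "per-VM effective: %.1f MiB\n", delta_mib / n       # → 9.1 MiB
  printf "naive total: %d GiB\n", n * guest_mib / 1024       # → 250 GiB
}'
```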

Network isolation — cubenet on pf

| metric | cubenet (honor) | Cube/eBPF (typical) | rig / notes |
| --- | --- | --- | --- |
| sandbox↔sandbox p50 RTT | 7 µs | ~5–10 µs | netperf TCP_RR; run-net-bench.sh |
| sandbox↔sandbox p99 RTT | 8 µs | ~10–15 µs | stddev 0.51 µs |
| TCP throughput, 1 stream | 14.6 Gbit/s | ~15–20 Gbit/s | iperf3 intra-host, memory-bandwidth-bound |
| Policy update, single add | 4.2 ms wall | 1–5 ms (bpftool) | dominated by pfctl spawn; kernel-side is µs |
| Policy update, 1000 IPs batched | 4 ms total | Cilium bulk: similar | pfctl -T replace; 250k ops/sec effective |
| Policy mutation under 14 Gbit/s | +43 µs p99 vs idle | Cilium: ~similar | no visible contention; throughput stable. policy-churn-under-load.sh |
| Per-sandbox anchors, N=1000 | 1.5 ms p95 load | Cilium: flat | requires set limit anchors 4096; see /appendix/ebpf-to-pf. policy-anchor-churn.sh |
| External rdr/DNAT add-latency | +0.24 µs p50 | Cilium: similar | 9.87 vs 9.63 µs bare s2s TCP_RR. ext-to-sandbox.sh |
| IPv6 sandbox↔sandbox p50 RTT | 8 µs | ~5–10 µs | netperf -6 TCP_RR; tied with v4 (8 µs). run-net-bench-v6.sh |
| IPv6 TCP throughput, 1 stream | 6.19 Gbit/s | ~15–20 Gbit/s | 84% of v4 on same host state; v6 header + extra rule-block fixed cost. Dual-stack via fd77::/64 ULA; NAT66 on re0 for egress. |
| IPv6 external egress (NAT66) | 23.8 ms | n/a | sbx-a → 2606:4700:4700::1111 vs 23.5 ms host-direct; NAT66 add-latency below sample noise. |
| Multi-stream TCP scaling | pending T2 | linear | 1-stream 14.6 Gbit/s → 16-stream 9 Gbit/s observed; attribution TBD |
| Per-sandbox rate limit (dummynet) | pending T2 | Cilium: bandwidth manager | rate-limit-dummynet.sh |

Measured between two VNET jails on cubenet0 with the cube_policy anchor loaded. Full methodology + gotchas at /appendix/ebpf-to-pf.
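The batched-update row is one atomic table swap rather than 1 000 serial adds, which is where the 4 ms total comes from. A hedged dry-run sketch of that shape — the anchor and table names here are hypothetical, while `pfctl -t`/`-T replace` is stock pf:

```shell
#!/bin/sh
# Dry-run sketch of a batched policy update: build a 1 000-address list,
# then replace the whole pf table in one pfctl call (atomic swap).
# Anchor name (cube_policy/sbx-001) and table name (allow) are assumptions.
set -eu
: "${DRYRUN:=1}"
run() { if [ "$DRYRUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

LIST=$(mktemp)
# 1 000 illustrative addresses: 10.77.0.1 .. 10.77.3.250
awk 'BEGIN { for (i = 0; i < 1000; i++) print "10.77." int(i / 250) "." (i % 250 + 1) }' > "$LIST"

run pfctl -a cube_policy/sbx-001 -t allow -T replace -f "$LIST"
rm -f "$LIST"
```

One fork of pfctl amortizes over the whole list, versus one fork per address in the single-add row.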

Lifecycle — per-sandbox provisioning

| operation | mean | p95 | notes |
| --- | --- | --- | --- |
| checkout (IP + tap + anchor + pool entry) | 21.5 ms | 23 ms | Per-sandbox anchor load dominates. |
| release (tap destroy + anchor flush + state kill) | 308 ms | 610 ms | ifconfig tap destroy bound; pf is µs. |
| pf states leaked after release-all | 0 / 10 | — | End-to-end correctness, not just latency. |

N=10 e2e run of pool-cubenet-e2e.sh using coppice-pool-ctl.
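The checkout and release rows time sequences of this shape. A dry-run sketch under stated assumptions — the tap number, bridge membership, anchor layout, rules path, and pool file are all illustrative, not coppice-pool-ctl's actual internals; the ifconfig/pfctl usage is stock FreeBSD:

```shell
#!/bin/sh
# Dry-run sketch of per-sandbox checkout / release (DRYRUN=1 just echoes).
set -eu
: "${DRYRUN:=1}"
run() { if [ "$DRYRUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

SBX=sbx-001
IP=10.77.0.21
POOL=$(mktemp)   # stand-in for the pool's bookkeeping store

# checkout: tap + bridge membership + per-sandbox anchor + pool entry
run ifconfig tap101 create
run ifconfig cubenet0 addm tap101
run pfctl -a "cube_policy/$SBX" -f "/var/run/coppice/$SBX.rules"
echo "$SBX $IP tap101" >> "$POOL"

# release: flush the anchor, kill live states for the address, destroy the tap
run pfctl -a "cube_policy/$SBX" -F rules
run pfctl -k "$IP"
run ifconfig tap101 destroy
rm -f "$POOL"
```

The release row's 308 ms mean sits almost entirely in the `ifconfig tap101 destroy` step; the two pfctl calls are microseconds of kernel work plus process-spawn overhead.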

E2B compatibility surface

| endpoint class | status | notes |
| --- | --- | --- |
| API server (CRUD + pause/resume + metrics + timeout) | 10/10 SDK calls pass | Python e2b-code-interpreter SDK against our Rust/Axum gateway. |
| envd /execute (run_code) | NDJSON streams | jexec python3 backend; print(1+1) returns 2. See /appendix/run-code-protocol. |
| `<port>-<id>.domain` routing | verified | pf rdr + dnsmasq + Go cubeproxy; LAN-peer curl returns 200. |
| Filesystem API (sandbox.files.*) | pending | Upstream envd REST; ~1 day of work. |
| Persistent kernel (ipykernel state across calls) | 7/7 SDK checks pass | ipykernel in jail + in-jail bridge translates iopub → NDJSON. x = 42 persists; pandas → text/html; matplotlib → image/png; errors → name/value/traceback. Rig: benchmarks/rigs/jupyter-e2e.sh. Minimal reproductions under examples/ (0104, 07). See /appendix/run-code-protocol. |

Kernel patches

| patch | state | receipt |
| --- | --- | --- |
| vmm-memseg-vnode.diff | working, measured | N=1000 density above; full audit at upstream-review.md |
| bhyve-vnode-restore.diff | working, measured | bhyve userland integration; -o snapshot.vnode_restore=true |
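In practice the second patch is opted into at restore time via the knob in the receipt column (mechanism details are at /appendix/vmm-vnode-patch). A dry-run sketch — the checkpoint path and VM name are illustrative, and `-o snapshot.vnode_restore=true` only exists on a patched bhyve:

```shell
#!/bin/sh
# Dry-run sketch: restore from a checkpoint with the patched
# vnode-backed path enabled (requires both patches above).
set -eu
: "${DRYRUN:=1}"
run() { if [ "$DRYRUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run bhyve -r /pool/ckpt/template.ckp -o snapshot.vnode_restore=true sbx-001
```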

What isn’t on this page

See /appendix/parity-gaps for everything still open.