This is a sketch, not code. The question it answers is: if you decided to ship a CubeSandbox-shape service on FreeBSD, what would you keep, what would you swap, and what would you have to actually build from scratch? The short answer — the agent/shim/VMM port is bounded and expensive; the eBPF dataplane port is not a port at all, it’s a redesign.
The decomposition
Table · CubeSandbox component → FreeBSD stand-in
| CubeSandbox component | Upstream | Upstream portability | FreeBSD stand-in | Status |
|---|---|---|---|---|
| CubeAPI | cube-api (Axum) | native (Rust/Axum portable) | e2b-compat Rust/Axum; SDK-verified● | DONE — 10/10 SDK calls pass● |
| Envd (run_code + files + commands) | Go binary inside guest with ipykernel | native | e2b-compat envd module: /execute streams NDJSON via jexec python● | MVP — stateless; ipykernel needed for state-persistence● |
| CubeMaster | cube-master (Go) | native (Go portable) | same module, unchanged≈ | untested (would port cleanly)≈ |
| Cubelet (node agent) | cubelet (Go) | native | port state-machine; replace Linux-specific plumbing≈ | ~1 week≈ |
| cube-hypervisor | Cloud Hypervisor v28 fork | Cloud Hypervisor (Linux/KVM only) | bhyve(8) with BHYVE_SNAPSHOT kernel option● | DONE — honor rebuilt; suspend/resume working● |
| Durable pool (snapshot-restore) | CH snapshot API | — | bhyvectl --suspend + bhyvectl --create + bhyve -r● | DONE — 17ms cc=1 hot hit; 58ms/entry refill● |
| cube-agent (guest) | Kata agent fork (Rust PID 1) | Kata agent, Linux-only | minimal in-guest REST server (or Linux guest)≈ | ~1 week (or N/A with Linux guests on bhyve)≈ |
| CubeNet (eBPF) | 3 eBPF programs + Go glue | Linux-only | pf tables + dummynet + VNET + netgraph≈ | months (real kernel work)≈ |
| Cross-guest memory dedup (KSM) | Linux KSM (ksmd + madvise) | Linux kernel feature | no FreeBSD equivalent shipping | REAL GAP — needs kernel work● |
| CubeProxy | nginx+lua | nginx | same nginx config | trivial≈ |
Read the matrix as a sort order: the “same afternoon” rows at the top are free; the “a month+” rows at the bottom are where the real work lives. The last row of each tier is the signal.
What maps cleanly (Tier 1)
CubeAPI, CubeMaster, CubeProxy. These are web-tier components — Axum,
Go HTTP, nginx+Lua. None of them touch Linux-specific kernel surfaces. The
E2B-compat axum handlers call through to CubeMaster over gRPC; CubeMaster
stores state in MySQL + Redis; CubeProxy parses Host headers and
proxies. Nothing here needs rewriting for FreeBSD.
The one real task is packaging: a poudriere port for cube-api and
cubemaster, rc.d scripts in /usr/local/etc/rc.d/, a default config in
/usr/local/etc/cube/, and log rotation via newsyslog(8) rather than
whatever logrotate convention the upstream uses. Boring, not hard.
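As a sketch of the rc.d half of that packaging work — service name, paths, and the config-file flag are assumptions, not upstream conventions — something like:

```sh
#!/bin/sh
# Hypothetical rc.d script for cube-api; names and paths are assumptions.
# PROVIDE: cube_api
# REQUIRE: NETWORKING
# KEYWORD: shutdown

. /etc/rc.subr

name="cube_api"
rcvar="cube_api_enable"
pidfile="/var/run/${name}.pid"

# daemon(8) supervises the process, restarts it (-r), and writes the pidfile;
# the --config flag is a placeholder for whatever cube-api actually takes.
command="/usr/sbin/daemon"
command_args="-P ${pidfile} -r /usr/local/bin/cube-api --config /usr/local/etc/cube/cube-api.toml"

load_rc_config $name
: ${cube_api_enable:=NO}

run_rc_command "$1"
```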
What maps with a different kernel underneath (Tier 2)
Cubelet
Cubelet itself — the state machine, the reconcilers, the NBI gRPC handlers,
the allocator logic — is pure Go and portable. What isn’t portable is the
plumbing: pkg/nsenter, pkg/cubemnt, pkg/numa, pkg/sysctl, and the
implicit assumptions about cgroups v2 hierarchy.
The replacements:
- pkg/nsenter (enters namespaces by PID) → jail_attach(2) via a small Go cgo shim. Different semantics: you attach to a named jail, not a set of namespaces. For most Cubelet call sites that’s actually cleaner.
- pkg/cubemnt (bind/overlay mounts for rootfs) → nmount(2) with nullfs for bind, unionfs for overlay. Caveats: unionfs on FreeBSD is less battle-tested than overlayfs on Linux. ZFS clones are the preferred substitute when the template rootfs is on ZFS.
- pkg/numa (pin vCPUs to NUMA nodes) → cpuset(1) + cpuset_setaffinity(2). Roughly 1:1 in capability.
- pkg/sysctl (tweak net tunables) → FreeBSD sysctls; same shape, different names.
- Resource accounting (cgroups v2) → rctl(8). The surface isn’t complete: cgroups v2 has per-cgroup IO weights, memory.swap accounting, pressure-stall signals, cpu.pressure. rctl has most of the analogs, but coverage isn’t perfect and the reporting APIs differ.
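To make the cgroups-to-rctl translation concrete, here is a sketch of the mapping Cubelet would carry. The CubeletLimits struct and the jail-naming scheme are invented for illustration; only the rctl(8) rule syntax (subject:subject-id:resource:action=amount) and the memoryuse/maxproc/pcpu resource names are real.

```go
package main

import "fmt"

// CubeletLimits is a hypothetical resource spec mirroring the cgroups-v2
// knobs Cubelet sets today.
type CubeletLimits struct {
	MemoryBytes int64 // memory.max analog
	MaxProcs    int   // pids.max analog
	CPUPercent  int   // cpu.max analog; rctl's pcpu is a percentage, 100 per core
}

// rctlRules renders one rctl rule string per limit, each ready to be
// handed to `rctl -a <rule>` for the sandbox's jail.
func rctlRules(jail string, l CubeletLimits) []string {
	return []string{
		fmt.Sprintf("jail:%s:memoryuse:deny=%d", jail, l.MemoryBytes),
		fmt.Sprintf("jail:%s:maxproc:deny=%d", jail, l.MaxProcs),
		fmt.Sprintf("jail:%s:pcpu:deny=%d", jail, l.CPUPercent),
	}
}

func main() {
	limits := CubeletLimits{MemoryBytes: 2 << 30, MaxProcs: 512, CPUPercent: 200}
	for _, r := range rctlRules("sandbox42", limits) {
		fmt.Println(r)
	}
}
```

What doesn’t fit this rule shape — IO weights, pressure-stall reporting — is exactly the residue flagged above.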
CubeShim
CubeShim is Rust. The Shim v2 framework crate (containerd-shim-rs) is
Linux-only today. This is the biggest meaningful port task on the control
side: making the shim framework compile and run on FreeBSD. The surface
that needs attention:
- Process supervision. Linux-specific prctl(PR_SET_CHILD_SUBREAPER) becomes procctl(PROC_REAP_ACQUIRE) on FreeBSD — same concept, different interface.
- Signal / pid primitives. Mostly portable through the nix crate.
- Unix-socket control plane. Portable.
- vsock to the guest. This is the other meaningful question.
FreeBSD’s bhyve has a virtio-vsock device (bhyve_vsock) but it’s less
mature than KVM’s. On older FreeBSD releases, the shim would have to talk
to the guest over a virtio-console or virtio-net channel instead. This
changes the boot flow (no vsock CID for the guest to bind to).
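One way to keep that uncertainty out of the shim’s core is to hide the guest channel behind an interface, so vsock and a console device are interchangeable at startup. A minimal sketch — the type names are invented, and both Dial bodies are placeholders for the real FreeBSD plumbing (an nmdm(4) tty on the console path):

```go
package main

import (
	"fmt"
	"io"
	"net"
)

// GuestTransport abstracts how the shim reaches the in-guest agent.
type GuestTransport interface {
	Dial() (io.ReadWriteCloser, error)
	Name() string
}

type vsockTransport struct{ cid, port uint32 }

func (t vsockTransport) Name() string { return fmt.Sprintf("vsock:%d:%d", t.cid, t.port) }
func (t vsockTransport) Dial() (io.ReadWriteCloser, error) {
	// Real implementation would use a FreeBSD vsock dialer; stubbed here.
	return nil, fmt.Errorf("vsock not available on this host")
}

type consoleTransport struct{ dev string } // e.g. one end of a bhyve nmdm(4) pair

func (t consoleTransport) Name() string { return "console:" + t.dev }
func (t consoleTransport) Dial() (io.ReadWriteCloser, error) {
	// Placeholder: a real implementation opens the tty, not a socket.
	return net.Dial("unix", t.dev)
}

// pickTransport prefers vsock where the bhyve device exists, otherwise
// falls back to the console channel (and the boot flow changes with it).
func pickTransport(haveVsock bool, cid, port uint32, dev string) GuestTransport {
	if haveVsock {
		return vsockTransport{cid, port}
	}
	return consoleTransport{dev}
}

func main() {
	fmt.Println(pickTransport(false, 3, 1024, "/dev/nmdm0B").Name())
}
```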
cube-hypervisor → bhyve
The apples-to-apples hypervisor port. Accept the gaps:
- Memory snapshot/restore lives behind BHYVE_SNAPSHOT — still upstream-experimental, off in GENERIC. We build it anyway (see /appendix/bench-rig for the recipe) and run the production shape as bhyve-durable-prewarm-pool: durable on-disk checkpoints + a SIGSTOP’d hot tier. 17 ms resume, which beats Cube’s 60 ms claim. The “workaround” framing that used to live here is gone — the path is real, just not in a stock kernel.
- No live migration. CubeSandbox doesn’t advertise live migration, so this doesn’t matter for feature parity.
- Different device-emulation set. bhyve’s virtio-blk/net/9p/console cover the common cases. virtio-fs is not in base as of FreeBSD 15.0; 9p-over-virtio is the nearest alternative and is a different guest contract.
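The pool mechanics reduce to three commands. A dry-run sketch of the argv sequences, following the bhyvectl --suspend / --create / bhyve -r flow from the table — VM and checkpoint names are hypothetical, and flag spellings track the BHYVE_SNAPSHOT build, so verify against your kernel:

```go
package main

import "fmt"

// Dry-run command builders for the bhyve durable-pool flow; nothing here
// shells out. In a real Cubelet these argv slices feed exec.Command.
func suspendArgs(vm, ckpt string) []string {
	// Checkpoint memory + device state to disk and halt the running VM.
	return []string{"bhyvectl", "--vm=" + vm, "--suspend=" + ckpt}
}

func createArgs(vm string) []string {
	// Pre-create the VM instance in vmm(4) before restoring into it.
	return []string{"bhyvectl", "--vm=" + vm, "--create"}
}

func restoreArgs(vm, ckpt string) []string {
	// Boot bhyve from the on-disk checkpoint; -r is the BHYVE_SNAPSHOT restore flag.
	return []string{"bhyve", "-r", ckpt, vm}
}

func main() {
	for _, argv := range [][]string{
		suspendArgs("sbx-template", "/pool/sbx-template.ckpt"),
		createArgs("sbx-42"),
		restoreArgs("sbx-42", "/pool/sbx-template.ckpt"),
	} {
		fmt.Println(argv)
	}
}
```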
The hypervisor client inside CubeShim (CubeShim/shim/src/hypervisor/)
that today talks to Cloud Hypervisor’s HTTP API would instead drive bhyve(8)
directly via libvmmapi (the FreeBSD VMM library). That’s a rewrite of that
one module; the surface it exposes upward is unchanged.
cube-agent (guest)
This is where you get to pick a threat model.
Path A — keep the guest agent. Port cube-agent (Kata fork) to work
inside a FreeBSD-based guest kernel. The agent does PID 1 + rustjail +
vsock-ttrpc. rustjail is Linux-OCI-specific (cgroups, namespaces);
replacing it with a FreeBSD jail-aware OCI runtime is work, but you retain
the full CubeSandbox architecture inside the VM.
Path B — skip it. If you’re on the FreeBSD jails path (/essays/freebsd-jails),
there’s no guest kernel, so there’s no guest agent. Cubelet drives the jail
directly. This is simpler but abandons the microVM isolation model.
Path C — Linux guests. Keep cube-agent as-is. Host is FreeBSD (bhyve); guests are Linux. This is how many FreeBSD bhyve deployments already run; you inherit all the Cube tuning. Downside: you’ve just bought a Linux userspace for your “FreeBSD” sandbox service.
What has no clean substitute (Tier 3)
CubeNet
Three eBPF programs, attached to TC clsact ingress/egress hooks, driving
policy stored in BPF maps, mutated from Go userspace via the cilium/ebpf
library. There is no direct FreeBSD equivalent.
What FreeBSD has:
- pf(4) — packet filter with NAT and stateful connections; tables for dynamic set-membership; anchors for per-jail scoping.
- dummynet(4) — pipes and queues for per-flow rate shaping.
- VNET — per-jail network stacks.
- epair(4) — veth-equivalent.
- netgraph(4) — user-programmable packet graph; the closest thing to programmable dataplane, but its model is “assemble a graph of kernel nodes” not “attach a BPF program to a TC hook.”
A cubevs-shaped component on FreeBSD would:
- Create one TAP per sandbox (in a per-jail VNET).
- Put them behind a bridge(4), or build a netgraph bridge with policy nodes between each TAP and the host NIC.
- Express per-sandbox egress policy as pf anchors keyed by TAP name, updated via pfctl -f on policy change.
- Use pf tables for set-membership (allowed destination prefixes, blocked sets) — this gives you fast membership updates without a full ruleset reload.
- Host-port mapping via pf rdr pass.
- SNAT via pf nat.
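Put together, the per-sandbox slice of pf.conf might look like this — interface names, addresses, and the anchor-naming scheme are all invented for illustration:

```
# Hypothetical pf.conf slice for one sandbox; names and addresses invented.
ext_if = "igb0"

# Fast set-membership: allowed egress prefixes for sandbox 42.
table <sbx42_allow> persist { 198.51.100.0/24, 203.0.113.0/24 }

# Translation rules come before filter rules in pf.conf.
nat on $ext_if from 10.77.0.0/16 to any -> ($ext_if)                 # SNAT for all sandboxes
rdr pass on $ext_if proto tcp to port 49152 -> 10.77.0.42 port 8080  # host-port map

# Per-sandbox anchor keyed by TAP; reload just this anchor with
#   pfctl -a sbx42 -f /usr/local/etc/cube/pf.sbx42.conf
anchor "sbx42" on tap42 {
	pass out proto tcp to <sbx42_allow> port { 80, 443 }
	block drop out log all
}
```

Per-entry membership changes then go through the table rather than a reload — pfctl -t sbx42_allow -T add 192.0.2.0/24 mutates in place — which is where the microseconds-versus-milliseconds distinction bites.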
The architectural question is how fast policy can change. eBPF maps mutate
per-entry in microseconds. pf tables mutate per-entry in microseconds too.
pf rulesets reload in milliseconds, which is fine for policy changes that
happen seconds apart but not for per-flow dynamic policy. Whether this
matters depends on what CubeNet actually does with the maps (a question
worth answering in /appendix/ebpf-to-pf).
What you don’t get:
- XDP-speed fast path. No equivalent ingress-before-stack hook.
- Arbitrary packet mutation in kernel from userspace updates without module reload. Netgraph gives you some of this if you write custom nodes, but the ergonomics are different.
- BPF map observability from userspace for debugging. There are no bpftool map dump equivalents.
Effort summary
Rough brackets, in FTE-weeks for someone already fluent in both platforms:
| Tier | Components | FTE-weeks |
|---|---|---|
| 1 | CubeAPI, CubeMaster, CubeProxy | <1 |
| 2 | Cubelet, CubeShim, cube-hypervisor, cube-agent | 6–10 |
| 3 | CubeNet + network-agent dataplane | 12+ (scope-uncertain) |
The honest read: an individual could ship Tier 1 in a weekend, Tier 2 in a quarter, and Tier 3 is a project with its own charter. The rhetorical move of calling this “a port” breaks down somewhere between Tier 2 and Tier 3 — at some point you stop porting CubeSandbox and start building something that happens to speak E2B.
What this tells us about threat-model decisions
If your threat model accepts shared-kernel isolation (which most
intra-organization agent workloads do in practice, even when they shouldn’t),
the jails path from /essays/freebsd-jails skips
Tier 2 and Tier 3 entirely. You get to ship.
If your threat model requires dedicated guest kernels, you’re signing up for Tier 2 at minimum, and Tier 3 to get parity on the network-isolation promise that CubeSandbox makes. The interesting design question is whether a partial Tier 3 (pf tables keyed by sandbox, static rulesets per tenant) is honest enough for the workloads you actually run.