If You Were Actually Building This

A FreeBSD port sketch, component by component.

This is a sketch, not code. The question it answers is: if you decided to ship a CubeSandbox-shape service on FreeBSD, what would you keep, what would you swap, and what would you have to actually build from scratch? The short answer — the agent/shim/VMM port is bounded and expensive; the eBPF dataplane port is not a port at all, it’s a redesign.

The decomposition

Table · CubeSandbox component → FreeBSD stand-in

| CubeSandbox component | Upstream | Portability | FreeBSD stand-in | Status |
| --- | --- | --- | --- | --- |
| CubeAPI | cube-api (Axum) | native (Rust/Axum portable) | e2b-compat Rust/Axum; SDK-verified | DONE — 10/10 SDK calls pass |
| Envd (run_code + files + commands) | Go binary inside guest with ipykernel | native | e2b-compat envd module: /execute streams NDJSON via jexec python | MVP — stateless; ipykernel needed for state persistence |
| CubeMaster | cube-master (Go) | native (Go portable) | same module, unchanged | untested (would port cleanly) |
| Cubelet (node agent) | cubelet (Go) | native | port state machine; replace Linux-specific plumbing | ~1 week |
| cube-hypervisor | Cloud Hypervisor v28 fork | Linux/KVM only | bhyve(8) with BHYVE_SNAPSHOT kernel option | DONE — honor rebuilt; suspend/resume working |
| Durable pool (snapshot-restore) | CH snapshot API | — | bhyvectl --suspend + bhyvectl --create + bhyve -r | DONE — 17ms cc=1 hot hit; 58ms/entry refill |
| cube-agent (guest) | Kata agent fork (Rust PID 1) | Linux-only | minimal in-guest REST server (or Linux guest) | ~1 week (or N/A with Linux guests on bhyve) |
| CubeNet (eBPF) | 3 eBPF programs + Go glue | Linux-only | pf tables + dummynet + VNET + netgraph | months (real kernel work) |
| Cross-guest memory dedup (KSM) | Linux KSM (ksmd + madvise) | Linux kernel feature | no FreeBSD equivalent shipping | REAL GAP — needs kernel work |
| CubeProxy | nginx+lua | nginx | same nginx config | trivial |

After the durable-bhyve and e2b-compat work, only two real gaps remain: CubeNet eBPF → pf (rebuild the dataplane), and KSM-equivalent memory dedup (new kernel feature).

Read the table as a sort order. The rows marked DONE or trivial are free. The rows measured in weeks and months are where the real work lives, and the last two gap rows, CubeNet and KSM, are the signal.

What maps cleanly (Tier 1)

CubeAPI, CubeMaster, CubeProxy. These are web-tier components — Axum, Go HTTP, nginx+Lua. None of them touch Linux-specific kernel surfaces. The E2B-compat axum handlers call through to CubeMaster over gRPC; CubeMaster stores state in MySQL + Redis; CubeProxy parses Host headers and proxies. Nothing here needs rewriting for FreeBSD.
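CubeProxy's job can be sketched in a few lines. The `<sandbox-id>.sb.example.com` host scheme, the `sandboxTarget` function, and the in-memory map (standing in for CubeMaster's MySQL/Redis state) are all illustrative assumptions, not the upstream design:

```go
package main

import (
	"fmt"
	"strings"
)

// sandboxTarget resolves an incoming Host header to a backend address,
// assuming a hypothetical "<sandbox-id>.sb.example.com" naming scheme.
func sandboxTarget(host string, backends map[string]string) (string, error) {
	host, _, _ = strings.Cut(host, ":") // drop any :port suffix
	id, ok := strings.CutSuffix(host, ".sb.example.com")
	if !ok {
		return "", fmt.Errorf("host %q is not in the sandbox domain", host)
	}
	addr, ok := backends[id]
	if !ok {
		return "", fmt.Errorf("no live sandbox %q", id)
	}
	return addr, nil
}

func main() {
	backends := map[string]string{"sb42": "10.23.42.2:8080"}
	addr, err := sandboxTarget("sb42.sb.example.com:443", backends)
	fmt.Println(addr, err)
}
```

Nothing in this path cares what kernel is underneath, which is the whole point of Tier 1.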

The one real task is packaging: a poudriere port for cube-api and cubemaster, rc.d scripts in /usr/local/etc/rc.d/, a default config in /usr/local/etc/cube/, and log rotation via newsyslog(8) rather than whatever logrotate convention the upstream uses. Boring, not hard.
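As a concrete sketch of that packaging, here is a minimal rc.d wrapper for cube-api. Every path, variable name, and dependency listed is an assumption for illustration, not an upstream convention:

```sh
#!/bin/sh
# /usr/local/etc/rc.d/cube_api -- hypothetical rc.d script for cube-api.
# Enable with: sysrc cube_api_enable=YES

# PROVIDE: cube_api
# REQUIRE: NETWORKING mysql redis
# KEYWORD: shutdown

. /etc/rc.subr

name=cube_api
rcvar=cube_api_enable

load_rc_config $name
: ${cube_api_enable:=NO}
: ${cube_api_config:=/usr/local/etc/cube/cube-api.toml}

# cube-api runs in the foreground; daemon(8) supervises it, writes the
# pidfile, and appends output to a log newsyslog(8) can rotate.
pidfile="/var/run/${name}.pid"
command="/usr/sbin/daemon"
command_args="-P ${pidfile} -o /var/log/${name}.log /usr/local/bin/cube-api --config ${cube_api_config}"

run_rc_command "$1"
```

Log rotation is then one line in newsyslog.conf(5) pointing at that log file.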

What maps with a different kernel underneath (Tier 2)

Cubelet

Cubelet itself — the state machine, the reconcilers, the NBI gRPC handlers, the allocator logic — is pure Go and portable. What isn’t portable is the plumbing: pkg/nsenter, pkg/cubemnt, pkg/numa, pkg/sysctl, and the implicit assumptions about cgroups v2 hierarchy.

The replacements, roughly:

  - pkg/nsenter (enter the container's namespaces) → jexec(8) / jail_attach(2).
  - pkg/cubemnt (Linux mount plumbing) → nmount(2) and FreeBSD mount options.
  - pkg/numa → cpuset(2) and domainset(9).
  - pkg/sysctl → still sysctl, but FreeBSD MIB names throughout.
  - cgroups v2 accounting and limits → rctl(8) with per-jail resource rules.
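For the cgroups-v2 half specifically, rctl(8) and cpuset(1) cover the limit and placement roles. A sketch, with a hypothetical jail name and made-up numbers:

```sh
# Stand-ins for cgroups v2 limits on a sandbox jail named "sb42"
rctl -a jail:sb42:memoryuse:deny=2g   # ~ memory.max
rctl -a jail:sb42:maxproc:deny=512    # ~ pids.max
rctl -a jail:sb42:pcpu:deny=200       # ~ cpu.max (two cores' worth of %CPU)
cpuset -j sb42 -l 8-15                # CPU placement, ~ cpuset.cpus
```

The semantics are not identical (rctl is per-rule enforcement, not a hierarchy), but the Cubelet reconciler loop maps onto it cleanly enough.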

CubeShim

CubeShim is Rust. The Shim v2 framework crate (containerd-shim-rs) is Linux-only today. This is the biggest meaningful port task on the control side: making the shim framework compile and run on FreeBSD. The surface that needs attention:

FreeBSD’s bhyve has a virtio-vsock device (bhyve_vsock) but it’s less mature than KVM’s. On older FreeBSD releases, the shim would have to talk to the guest over a virtio-console or virtio-net channel instead. This changes the boot flow (no vsock CID for the guest to bind to).

cube-hypervisor → bhyve

This is the closest thing to an apples-to-apples hypervisor swap, gaps and all:

The hypervisor client inside CubeShim (CubeShim/shim/src/hypervisor/) that today talks to Cloud Hypervisor’s HTTP API would instead drive bhyve(8) directly via libvmmapi (the FreeBSD VMM library). That’s a rewrite of that one module; the surface it exposes upward is unchanged.
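The suspend/resume flow from the table, spelled out. The VM name, checkpoint path, and device layout are hypothetical, and the host kernel must be built with the BHYVE_SNAPSHOT option:

```sh
# Checkpoint a running VM to disk, then halt it.
bhyvectl --vm=sb42 --suspend=/pool/ckpt/sb42

# Later: relaunch with the same device configuration as the original
# boot, plus -r pointing at the checkpoint to resume from it.
bhyve -c 2 -m 1G -s 0,hostbridge -s 4,virtio-blk,/pool/img/sb42.img \
      -s 31,lpc -l com1,stdio -r /pool/ckpt/sb42 sb42
```

This is the path behind the 17ms hot-hit number in the table: keep a pool of checkpointed VMs and pay only the resume cost at claim time.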

cube-agent (guest)

This is where you get to pick a threat model.

Path A — keep the guest agent. Port cube-agent (Kata fork) to work inside a FreeBSD-based guest kernel. The agent does PID 1 + rustjail + vsock-ttrpc. rustjail is Linux-OCI-specific (cgroups, namespaces); replacing it with a FreeBSD jail-aware OCI runtime is work, but you retain the full CubeSandbox architecture inside the VM.

Path B — skip it. If you’re on the FreeBSD jails path (/essays/freebsd-jails), there’s no guest kernel, so there’s no guest agent. Cubelet drives the jail directly. This is simpler but abandons the microVM isolation model.

Path C — Linux guests. Keep cube-agent as-is. Host is FreeBSD (bhyve); guests are Linux. This is how many FreeBSD bhyve deployments already run; you inherit all the Cube tuning. Downside: you’ve just bought a Linux userspace for your “FreeBSD” sandbox service.

What has no clean substitute (Tier 3)

CubeNet

Three eBPF programs, attached to TC clsact ingress/egress hooks, driving policy stored in BPF maps, mutated from Go userspace via the cilium/ebpf library. There is no direct FreeBSD equivalent.

What FreeBSD has, in the pieces the table already names:

  - pf(4): a stateful filter with anchors (sub-rulesets swappable at runtime) and tables (kernel address sets with fast membership updates).
  - dummynet(4): bandwidth, delay, and queue shaping.
  - VNET: a fully virtualized network stack per jail.
  - netgraph(4): composable in-kernel networking nodes.
  - if_bridge(4) and tap(4): the plumbing between guest NICs and the host.

A CubeNet-shaped component on FreeBSD would:

  1. Create one TAP per sandbox (in a per-jail VNET).
  2. Put them behind a bridge(4) or build a netgraph bridge with policy nodes between each TAP and the host NIC.
  3. Express per-sandbox egress policy as pf anchors keyed by TAP name, updated via pfctl -f on policy change.
  4. Use pf tables for set-membership (allowed destination prefixes, blocked sets) — this gives you fast membership updates without a full ruleset reload.
  5. Host-port mapping via pf rdr pass.
  6. SNAT via pf nat.
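Steps 3–6 in pf.conf terms. Interface names, addresses, ports, and the anchor naming scheme are all hypothetical:

```
# pf.conf sketch: egress policy for sandbox "sb42" behind tap42
ext_if = "igb0"
table <sb42_allow> persist { 140.82.112.0/20 }          # step 4: set-membership

nat on $ext_if from 10.23.42.0/24 to any -> ($ext_if)   # step 6: SNAT
rdr pass on $ext_if proto tcp from any to any port 49152 \
    -> 10.23.42.2 port 8080                             # step 5: host-port map

anchor "cubenet/sb42" on tap42 {                        # step 3: per-sandbox anchor
    pass out proto tcp to <sb42_allow>
    block out                                           # default-deny egress
}
```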

The architectural question is how fast policy can change. eBPF maps mutate per-entry in microseconds. pf tables mutate per-entry in microseconds too. pf rulesets reload in milliseconds, which is fine for policy changes that happen seconds apart but not for per-flow dynamic policy. Whether this matters depends on what CubeNet actually does with the maps (a question worth answering in /appendix/ebpf-to-pf).

What you don’t get: programmability. pf can match, shape, translate, and count, but it cannot run arbitrary per-packet code the way an eBPF program can. Any CubeNet logic beyond classify-and-verdict (custom in-kernel telemetry, protocol-aware inspection, flow-state machines in maps) has to move to userspace or be dropped from the design.

Effort summary

Rough brackets, in FTE-weeks for someone already fluent in both platforms:

| Tier | Components | FTE-weeks |
| --- | --- | --- |
| 1 | CubeAPI, CubeMaster, CubeProxy | <1 |
| 2 | Cubelet, CubeShim, cube-hypervisor, cube-agent | 6–10 |
| 3 | CubeNet + network-agent dataplane | 12+ (scope-uncertain) |

The honest read: an individual could ship Tier 1 in a weekend, Tier 2 in a quarter, and Tier 3 is a project with its own charter. The rhetorical move of calling this “a port” breaks down somewhere between Tier 2 and Tier 3 — at some point you stop porting CubeSandbox and start building something that happens to speak E2B.

What this tells us about threat-model decisions

If your threat model accepts shared-kernel isolation (which most intra-organization agent workloads do in practice, even when they shouldn’t), the jails path from /essays/freebsd-jails skips Tier 2 and Tier 3 entirely. You get to ship.

If your threat model requires dedicated guest kernels, you’re signing up for Tier 2 at minimum, and Tier 3 to get parity on the network-isolation promise that CubeSandbox makes. The interesting design question is whether a partial Tier 3 (pf tables keyed by sandbox, static rulesets per tenant) is honest enough for the workloads you actually run.