What CubeSandbox Actually Is

Read from the source, not the README.

The five-word pitch is “Firecracker-class sandbox service.” That’s partly right and meaningfully wrong. Read from the source, CubeSandbox is a composition: three mature upstream projects, two home-grown layers, and some tuning. The novelty is the composition and the operational story, not a new VMM.

The upstreams

Three pieces were not written for CubeSandbox: the VMM (cube-hypervisor, a fork of Cloud Hypervisor v28), the shim (CubeShim, a Rust shim-v2 implementation), and the guest agent (cube-agent, a fork of the Kata agent, running as PID 1 in the guest).

The “dedicated guest kernel per agent” claim is literally accurate: each sandbox gets its own cube-hypervisor process, which creates its own KVM microVM with its own vCPUs and its own guest kernel. This is a meaningfully different isolation model from Docker (shared host kernel, namespace partitioning) and from FreeBSD jails (shared host kernel, jail partitioning). It is not a meaningfully different isolation model from Firecracker, Kata-on-CH, or any other Cloud-Hypervisor-on-KVM deployment — Cube’s wording implies novelty there, and there isn’t any.

The new pieces

Two components are native to CubeSandbox.

CubeAPI — the E2B gateway

A Rust Axum service. One binary, under 500 lines of routing glue.

CubeAPI/src/routes.rs L26–44 @c439bb5:

```rust
pub fn build_router(state: AppState) -> Router {
    let sandbox_routes = Router::new()
        .route("/sandboxes", get(sandboxes::list_sandboxes))
        .route("/sandboxes", post(sandboxes::create_sandbox))
        .route("/v2/sandboxes", get(sandboxes::list_sandboxes_v2))
        .route("/sandboxes/:sandboxID", get(sandboxes::get_sandbox))
        .route("/sandboxes/:sandboxID", delete(sandboxes::kill_sandbox))
        .route("/sandboxes/:sandboxID/pause", post(sandboxes::pause_sandbox))
        .route("/sandboxes/:sandboxID/resume", post(sandboxes::resume_sandbox))
        .route("/sandboxes/:sandboxID/connect", post(sandboxes::connect_sandbox))
        // ...and seven more, some of which return 501.
}
```
The canonical list of E2B-compatible endpoints

The CubeAPI/README.md support matrix is disarmingly honest: nine of seventeen E2B endpoints are marked ✅, the rest ❌. The ✅ set is the sandbox lifecycle — create, list, get, kill, pause, resume, connect. The ❌ set is logs, metrics, persistent snapshots, TTL management, per-sandbox network updates. For the common E2B agent flow (Sandbox.create → run_code → close) the drop-in claim is true. For anything observability-adjacent, it isn’t yet.
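That supported lifecycle is plain HTTP, which is what makes the drop-in claim testable without any SDK at all. A sketch of the create-then-kill flow against a stub server; the stub and the helper names are illustrative, not CubeAPI’s code:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// stubAPI stands in for CubeAPI's ✅ lifecycle surface. The handlers
// are fakes for illustration only.
func stubAPI() *httptest.Server {
	mux := http.NewServeMux()
	mux.HandleFunc("POST /sandboxes", func(w http.ResponseWriter, _ *http.Request) {
		w.WriteHeader(http.StatusCreated) // create
	})
	mux.HandleFunc("DELETE /sandboxes/{id}", func(w http.ResponseWriter, _ *http.Request) {
		w.WriteHeader(http.StatusNoContent) // kill
	})
	return httptest.NewServer(mux)
}

// createSandbox and killSandbox walk the common agent flow over HTTP.
func createSandbox(base string) int {
	resp, err := http.Post(base+"/sandboxes", "application/json", nil)
	if err != nil {
		return 0
	}
	defer resp.Body.Close()
	return resp.StatusCode
}

func killSandbox(base, id string) int {
	req, _ := http.NewRequest(http.MethodDelete, base+"/sandboxes/"+id, nil)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0
	}
	defer resp.Body.Close()
	return resp.StatusCode
}

func main() {
	srv := stubAPI()
	defer srv.Close()
	fmt.Println(createSandbox(srv.URL)) // 201
	fmt.Println(killSandbox(srv.URL, "sb-1")) // 204
}
```

Anything observability-adjacent would 404 or 501 against today’s matrix, which is the whole caveat in one sentence.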

CubeNet / cubevs — the eBPF fabric

Three BPF programs, written in C and compiled against CubeNet/vmlinux/x86/vmlinux.h, make up the dataplane.

Go glue in CubeNet/cubevs/*.go (via cilium/ebpf, with bpf2go-generated bindings *_x86_bpfel.{go,o}) compiles and attaches the BPF programs and maintains the maps that drive policy. See cubevs/netpolicy.go, snat.go, port.go, tap.go, tc.go, reaper.go.

This is the component that doesn’t port cleanly to FreeBSD — a whole appendix is dedicated to what an honest pf-and-VNET stand-in would look like.

The orchestration glue

Cubelet + CubeMaster are Go. They are shaped like kubelet + a master — Cubelet is the node-local reconciler, CubeMaster the cluster coordinator. Dependencies give the shape away: containerd/ttrpc, cilium/ebpf, opencontainers/runtime-spec, prometheus/procfs, MySQL, Redis. network-agent is a newer per-node service that sits between Cubelet and CubeNet/cubevs and provides EnsureNetwork / ReleaseNetwork RPCs — a seam that will matter for the port sketch.

The Cubelet/pkg/allocator/ package is the one that earns the sub-60ms number. It’s the pre-warm pool: a population of already-booted-and-paused VMs sitting ready, so “create” becomes “pick one, resume its memory snapshot, reconfigure network, return.” This is the trick; see /claims for the skepticism.
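The mechanism is simple enough to sketch. This is a minimal model of a pre-warm pool under the shape described above, not Cubelet’s allocator; all names are illustrative:

```go
package main

import (
	"errors"
	"fmt"
)

// WarmVM stands in for an already-booted, paused microVM.
type WarmVM struct {
	ID     string
	Paused bool
}

// Pool holds pre-warmed VMs in a buffered channel.
type Pool struct{ warm chan *WarmVM }

func NewPool(size int) *Pool {
	p := &Pool{warm: make(chan *WarmVM, size)}
	for i := 0; i < size; i++ {
		// Stand-in for boot-then-pause; the real pool resumes a
		// memory snapshot and reconfigures the network on pickup.
		p.warm <- &WarmVM{ID: fmt.Sprintf("vm-%d", i), Paused: true}
	}
	return p
}

// Create picks a warm VM instead of cold-booting one, which is why
// the latency budget is dominated by resume, not boot.
func (p *Pool) Create() (*WarmVM, error) {
	select {
	case vm := <-p.warm:
		vm.Paused = false // "resume" stand-in
		return vm, nil
	default:
		return nil, errors.New("pool empty: fall back to cold boot")
	}
}

func main() {
	pool := NewPool(2)
	vm, _ := pool.Create()
	fmt.Println(vm.ID, vm.Paused) // vm-0 false
	pool.Create()
	_, err := pool.Create()
	fmt.Println(err != nil) // true: pool drained
}
```

The design question the sketch surfaces is the one /claims pokes at: the sub-60ms number holds only while the pool is non-empty, and refilling it is a background cost someone pays.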

Putting it together

Figure · Control-plane + data-plane map

- edge: E2B SDK client (e2b-code-interpreter) → CubeProxy (nginx+lua)
- control plane: CubeAPI (axum, E2B) · CubeMaster (go, sched)
- node: Cubelet (go, node agent) · network-agent (go, RPC) · CubeNet / cubevs (3 eBPF programs) · CubeShim (rust, shim v2)
- VMM: cube-hypervisor (fork of Cloud Hypervisor v28), driving KVM + virtio + the snapshot API
- MicroVM: dedicated guest kernel · cube-agent (fork of Kata) as PID 1 / rustjail · vsock ⇄ ttrpc to the shim
- Linux host kernel surface: KVM ioctls · cgroups v2 · eBPF/TC · netlink · AF_VSOCK · seccomp-bpf · virtio-fs

Composition view. cube-hypervisor, CubeShim, and cube-agent are forks of upstream projects.

Three things to notice on the diagram.

The forked components are not ours. The VMM, the shim, and the guest agent are forks of projects that long predate CubeSandbox. Calling Cube “a new microVM stack” elides this. Calling it “a thoughtful composition tuned for agent workloads” is more accurate and, frankly, better PR for a piece of genuinely useful engineering.

The seam between Cubelet and CubeNet is where agent workloads get their network shape. The eBPF programs enforce isolation between sandboxes on the same host and between each sandbox and the host NIC; the shim doesn’t touch the network dataplane at all. This is why /appendix/ebpf-to-pf carries so much weight — replacing that single component is a significant portion of any FreeBSD port.

The kernel-surface bar at the bottom is Linux-only. KVM, cgroups v2, eBPF TC attach, netlink, AF_VSOCK, virtio-fs. Every one of those has a FreeBSD analog, but not every analog is equally mature. The bhyve port page and the caveats handle those one at a time.

The <5MB claim is the one that most directly tests one’s priors. There is no “aggressively stripped runtime” that explains it — cube-hypervisor is roughly the Cloud Hypervisor binary (a few MB resident on its own). The claim describes per-instance overhead, not total footprint, and it relies on CoW page sharing across identical guests at the host level. If you boot 1,000 microVMs from the same rootfs.img and the same kernel image, the text/rodata of the kernel and the shared pages of the guest userspace de-duplicate in the host’s page cache, so the marginal cost of the 1,001st VM is small. See /anatomy — snapshot cloning deep-dive for the mechanism and /claims for the methodology scrutiny.

What this implies for the FreeBSD pages

The port maps are best understood one component at a time, and they fall into three tiers.

Tier 1 — ports cleanly. CubeAPI (Axum is portable). CubeMaster (Go, generic RPC). CubeProxy (nginx+Lua). Cubelet’s state-machine and RPC surfaces (Go). These run largely unchanged: the Linux-specific plumbing (nsenter, cgroups accounting, systemd lifecycle) needs replacing, but the architecture holds.

Tier 2 — maps with a different kernel underneath. cube-hypervisor → bhyve. The “fair” port accepts giving up Cloud Hypervisor’s mature snapshot/restore API, seccomp-bpf host-process sandboxing (Capsicum instead), and rust-vmm as a dependency. The upside is a native FreeBSD VMM that doesn’t fight the host. See /essays/freebsd-bhyve.

Tier 3 — there’s no clean substitute. CubeNet’s eBPF dataplane. FreeBSD has pf + VNET + netgraph, but none of them give you “compile a BPF program, attach it to TC ingress, mutate maps from userspace as policy changes.” An honest port rebuilds the dataplane, not the control plane. See /appendix/ebpf-to-pf.
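For contrast, the closest thing pf offers to a runtime-mutable BPF map is a table: rules reference it by name, and pfctl mutates membership without a ruleset reload. A sketch, with hypothetical table and interface names:

```pf
# pf.conf fragment (illustrative; names are hypothetical)
table <sb_allowed> persist
block in on tap0 all
pass in on tap0 from <sb_allowed> to any keep state

# The control plane mutates the table at runtime, no reload:
#   pfctl -t sb_allowed -T add 10.0.0.3
#   pfctl -t sb_allowed -T delete 10.0.0.3
```

What this buys is membership churn without reload; what it doesn’t buy is arbitrary per-packet logic, which is precisely the gap the appendix measures.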

The port-sketch page walks the full decomposition. The running name for the FreeBSD-native rebuild is Coppice — the forestry term for many trees regrowing from one stump lines up with what our vmm-vnode patch does to a golden checkpoint.