This exists because the original spec kept writing “rust-vmm / KVM microVMs”, which is imprecise. cube-hypervisor is a fork of Cloud Hypervisor (v28), a specific VMM built on rust-vmm crates. The distinction matters for the comparison.
What rust-vmm is
A project housing crates shared across Rust VMMs — Firecracker, Cloud Hypervisor, crosvm. Each VMM picks crates it needs and builds its own device model, API, and lifecycle on top. The crates are small and focused; the decisions about how to compose them are not.
Relevant crates used by hypervisor/:
- `kvm-ioctls` — safe wrappers around `KVM_CREATE_VM`, `KVM_CREATE_VCPU`, `KVM_SET_USER_MEMORY_REGION`, `KVM_RUN`, `KVM_IRQ_LINE`, `KVM_GET_SREGS`, et al.
- `kvm-bindings` — raw ioctl structs from `linux/kvm.h`.
- `vm-memory` — guest memory abstraction (`GuestMemoryMmap`, regions, slots).
- `vm-fdt` — device-tree builder (arm64; not the hot path on x86_64).
- `linux-loader` — boot protocols (ELF, bzImage, PVH).
- `virtio-queue` + `virtio-bindings` — virtio device common code.
- `vhost` / `vhost-user-*` — vhost protocol and vhost-user frontends (userspace device offload).
- `vmm-sys-util` — `EventFd`, signalfd, timerfd helpers.
- `seccompiler` — compiles seccomp-bpf programs at runtime.
What Cloud Hypervisor is
A production VMM that uses rust-vmm crates plus substantial original
code: a device model spanning virtio-net, -blk, -fs, -rng, -balloon,
-console, -vsock; a PCI root complex with passthrough (VFIO); a vhost-user
frontend layer; an HTTP control API (vmm-service); a snapshot/restore
implementation; ACPI tables; SGX EPC regions; vDPA support. Roughly 100k
lines of Rust in the hypervisor/ tree of the CubeSandbox repo.
Upstream: cloud-hypervisor/cloud-hypervisor. Cube’s fork identifies as
cube-hypervisor v28.0.0 in Cargo.toml. The upstream license headers
(Apache-2.0 & BSD-3-Clause) are preserved.
What Firecracker is (for comparison)
A minimal VMM built directly on rust-vmm. Intentionally limited device set (virtio-net, -blk, -vsock, serial), no PCI, no ACPI, no passthrough, no live migration. Aimed at Lambda-style workloads — millisecond boot, small attack surface. The “Firecracker” mental model is closer to “strip rust-vmm to the minimum and go fast.”
Why the distinction matters
Cloud Hypervisor is feature-rich. It has snapshot/restore, PCI passthrough, virtio-fs, vhost-user, vDPA — capabilities that meaningfully shape what a FreeBSD equivalent must provide to match. bhyve has most of these, but:
| Feature | Cloud Hypervisor 28 | bhyve 15.0 | Gap |
|---|---|---|---|
| Memory snapshot/restore | Stable, HTTP API | BHYVE_SNAPSHOT experimental | Material |
| virtio-fs | In-tree | Not in base | 9p is the alternative |
| vhost-user | Full | Not in base | Need userspace equivalent |
| PCI passthrough | VFIO | ppt(4) | Comparable |
| vDPA | In-tree | Not in base | |
| ACPI tables | Generated in-VMM | Minimal | OK for Linux guests; xBSD guests okay |
| seccomp-bpf sandbox | Yes | N/A — use Capsicum | Capsicum arguably stronger |
A comparison with “Firecracker” would miss most of these — Firecracker is a simpler target. Cube specifically picked Cloud Hypervisor, so a fair FreeBSD port has to match Cloud Hypervisor’s capability set, not Firecracker’s.
Files worth reading
- `hypervisor/Cargo.toml` — lines 1-30: crate metadata; the `cube-hypervisor` name, v28, and the Cloud Hypervisor upstream identification.
- `hypervisor/src/main.rs` — lines 1-60: VMM process entry, seccomp install, control-plane bootstrap.
- `hypervisor/hypervisor/src/kvm/` — the KVM backend.
- `hypervisor/hypervisor/src/vm.rs` — the `Vm` abstraction across KVM/MSHV.
- `hypervisor/vmm/src/api/` — HTTP control-plane handlers (where `/vm.snapshot`, `/vm.restore`, and `/vmm.ping` live).
Open questions
- How much has CubeSandbox diverged from upstream Cloud Hypervisor? A `git log upstream..cube-hypervisor --stat` on the hypervisor subtree would answer this; we haven't run it yet.
- What's the actual memory cost of cube-hypervisor at idle? We don't have a Linux+Cube measurement box; our bhyve equivalent on honor is ~13 MiB process RSS per idle VM with the vmm-vnode patch — see the homepage density table for the host-level picture.
git log upstream..cube-hypervisor --staton the hypervisor subtree would answer this; we haven’t run it yet. - What’s the actual memory cost of cube-hypervisor at idle? We don’t have a Linux+Cube measurement box; our bhyve equivalent on honor is ~13 MiB process RSS per idle VM with the vmm-vnode patch — see the homepage density table for the host-level picture.