eBPF on FreeBSD

FreeBSD has carried BPF since its earliest releases — Steven McCanne and Van Jacobson’s original 1990-vintage packet filter, the one tcpdump still opens. That is not the thing people mean in 2026 when they say “eBPF.” This page is the honest audit: what of the Linux extended-BPF ecosystem has a FreeBSD counterpart, what doesn’t, and where the gap is load-bearing for the Coppice workload.

Two things, same three letters

Classic BPF (cBPF) lives in sys/net/bpf.c and is exposed as /dev/bpf. It’s a register-machine packet classifier (an accumulator, an index register, a tiny fixed instruction set), used by tcpdump(1), by pf’s capture hooks, by dhclient, by ng_bpf in netgraph. FreeBSD’s bpf(4) man page describes this, not the Linux thing. It is mature, small, boring, and in-tree.
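For a sense of how small the machine is, here is an illustrative evaluator for a three-opcode slice of cBPF in plain C. The opcode shapes and the “ip” filter mirror what tcpdump -d prints; this is a sketch for exposition, not the in-tree interpreter in sys/net/bpf_filter.c, and it assumes a well-formed program (the kernel validates filters at attach time).

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative evaluator for a three-opcode subset of classic BPF:
 * an accumulator, absolute packet loads, conditional jumps, and a
 * return value (bytes to capture, 0 = drop). */
enum op { OP_LDH, OP_JEQ, OP_RET };

struct insn { enum op op; uint32_t k; uint8_t jt, jf; };

static uint32_t cbpf_run(const struct insn *prog, const uint8_t *pkt,
                         size_t len)
{
    uint32_t acc = 0;
    for (size_t pc = 0; ; pc++) {
        const struct insn *i = &prog[pc];
        switch (i->op) {
        case OP_LDH:                     /* A <- big-endian halfword at k */
            if ((size_t)i->k + 2 > len)
                return 0;                /* out-of-bounds load drops */
            acc = (uint32_t)pkt[i->k] << 8 | pkt[i->k + 1];
            break;
        case OP_JEQ:                     /* pc += (A == k) ? jt : jf */
            pc += (acc == i->k) ? i->jt : i->jf;
            break;
        case OP_RET:
            return i->k;                 /* snaplen to accept, 0 = drop */
        }
    }
}

/* "ip": accept iff the Ethernet ethertype (offset 12) is 0x0800. */
static const struct insn ip_filter[] = {
    { OP_LDH, 12,     0, 0 },
    { OP_JEQ, 0x0800, 0, 1 },
    { OP_RET, 262144, 0, 0 },            /* accept, default snaplen */
    { OP_RET, 0,      0, 0 },            /* drop */
};
```

Four instructions is a realistic filter; the entire fixed instruction set is not much larger, which is why the kernel can validate it without anything resembling the eBPF verifier.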

Extended BPF (eBPF) is a different machine: 64-bit registers, maps as first-class objects, a verifier that proves termination and memory safety at load, helper functions, program types bound to attach points (XDP, tc, tracepoint, kprobe, cgroup_skb, sched_ext…), JIT compilers per architecture, and a userspace ecosystem — libbpf, bpftool, CO-RE, bpftrace, Cilium — that treats the kernel’s BPF subsystem as a programmable substrate. None of that is in FreeBSD base in 2026.

So when a Cube engineer asks “does FreeBSD have BPF”, the answer is yes in the sense that tcpdump runs and no in the sense that nothing you write against libbpf will load.

The port attempts, and where they stand

The serious port attempt is Yutaro Hayakawa’s generic-ebpf, presented at BSDCan 2018 as eBPF Implementation for FreeBSD. The design decomposes a Linux-style eBPF runtime into layered components — interpreter, JIT, map subsystem, an ebpf_dev character device that stands in for Linux’s bpf(2) syscall — and implements each portably so the same runtime can live in FreeBSD kernel, Linux kernel, and userspace. Two shipped kernel modules: ebpf.ko (the runtime) and ebpf-dev.ko (the loader). A companion libbpf-freebsd port targets the same interface.

The unhappy fact, as of this writing, is that generic-ebpf’s dev branch last saw a commit on 2021-05-28. Five years of silence on a project whose author has since written more interesting things (vale-bpf, Cilium-adjacent work) is, in the unwritten conventions of our tribe, the signal that a thing has stopped. It still builds against a FreeBSD 13.x tree of the right vintage; it almost certainly does not build cleanly on 15.0 without work. We have not verified either direction, and the 2026 status should be treated as: unmaintained, rebuildable with effort, not shippable.

There is no separate, in-tree ebpf.ko in FreeBSD base. bpf(4) is cBPF. The Linux-compat layer (linuxkpi, linuxulator) does not emulate bpf(2). The FreeBSD Foundation’s status reports for 2025-Q1 through 2025-Q3 carry no eBPF work items (Q3 2025 checked directly). The freebsd-net@ threads we found are the periodic “is anyone doing this?” ones, with answers that rhyme with the KSM ones in /appendix/ksm-equivalent — real interest, no shipped code, GPL concerns about lifting the Linux implementation wholesale.

The Matt Macy thread is harder to pin down. Macy drove the consolidation of FreeBSD’s ZFS onto the ZFS-on-Linux codebase; he has appeared in a few freebsd-net@ conversations about eBPF, but we could not locate a corresponding tree or patch series. Mentioning him in the same sentence as “FreeBSD eBPF port” should be flagged as unverified until somebody points to a branch.

XDP? No. What then?

XDP (eXpress Data Path) is a Linux-kernel construct by design: an eBPF program attached at the driver’s receive path, running before skb allocation, with verdicts XDP_PASS/XDP_DROP/XDP_TX/XDP_REDIRECT. It is cheap because it sidesteps the normal stack. There is no FreeBSD equivalent, full stop — not in name, not in “same-shape-different-API.”

What FreeBSD offers instead is netmap, Luigi Rizzo’s 2012 framework (netmap: A Novel Framework for Fast Packet I/O, USENIX ATC 2012). netmap maps NIC ring buffers into userspace, eliminates per-packet allocation and copies, batches syscalls, and delivers 14.88 Mpps on a single 900 MHz core on the original reference hardware. On modern 10/25/100 GbE NICs with proper driver support (ixgbe, ixl, ice, mlx5 among others) it routinely saturates line rate with tens of cores to spare. FreeBSD has shipped netmap in base since 11.0. sys/dev/netmap/ + sys/net/netmap*.
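The consumer side of that API is compact. A hedged sketch of the classic nm_open()/nm_nextpkt() receive loop, using the convenience wrappers from net/netmap_user.h — FreeBSD-only, error handling trimmed, “ix0” is an assumed interface name, and newer trees prefer the libnetmap nmport_* API over these wrappers:

```c
/* Hedged sketch of a netmap consumer (FreeBSD-only). nm_open() takes
 * the NIC rings away from the host stack; nm_nextpkt() walks received
 * slots with no per-packet allocation or copy. */
#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>
#include <poll.h>

int main(void)
{
    struct nm_desc *d = nm_open("netmap:ix0", NULL, 0, NULL);
    if (d == NULL)
        return 1;

    struct pollfd pfd = { .fd = NETMAP_FD(d), .events = POLLIN };
    for (;;) {
        poll(&pfd, 1, -1);   /* one syscall per batch, not per packet */
        struct nm_pkthdr h;
        const u_char *buf;
        while ((buf = nm_nextpkt(d, &h)) != NULL) {
            /* classify/forward buf[0..h.len) -- ordinary C, no verifier */
            (void)buf;
        }
    }
    /* nm_close(d); unreachable in this sketch */
}
```

Note what is absent: no load-time verification, no helper-function surface, no map abstraction. The program is just a process holding a file descriptor.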

VALE (Rizzo & Lettieri, CoNEXT 2012) is the in-kernel switched-ethernet complement: virtual ports you can attach bhyve VMs, jails, or userspace programs to, with the same memory-mapped-ring discipline. man 4 vale.

Shape differences that matter for Coppice:

| property | XDP | netmap / VALE | consequence |
|---|---|---|---|
| program model | verified eBPF in kernel | userspace program + kernel fast path | netmap logic is ordinary C, no verifier ceiling; you own the crash if it segfaults |
| attach point | driver RX, pre-stack | NIC ring-level, stack bypass | both sit before the normal stack; netmap is more radical — even the host stops seeing the packet unless you hand it back |
| policy-map mutation | BPF map update_elem, µs | any userspace data structure | netmap gives you a C process; your “map” is whatever you allocate; no kernel round-trip, no verifier |
| policy-program mutation | reload verified program, ms | rebuild + relink netmap app, seconds | eBPF wins clearly here; XDP program reload is cheaper than rewriting, recompiling, and respawning a netmap consumer |
| attach surface | ip link set dev eth0 xdp obj prog.o | nm_open("netmap:ix0") from your process | XDP is declarative attach; netmap is a capability you grab and hold |
| latency through fast path | hundreds of ns per hop | similar; a few µs via VALE port | loose, published figures; our measured sandbox-to-sandbox TCP_RR through the regular pf/bridge path (no netmap) lands at 7 µs p50 — see /appendix/ebpf-to-pf for the full table |
| maturity on FreeBSD | N/A | in-tree since 11, used in production | pfSense/Netgate, Rubicon, iXsystems appliances all ship netmap paths |

That table is XDP vs. netmap/VALE for the specific shape we care about: a per-interface programmable fast path with userspace-mutable policy state. Neither tool is “better” in the abstract; they optimize different axes of the same problem.

ifconfig ix0 on a netmap-attached interface looks unremarkable — the interface remains visible, up, with an address — until you notice that nothing is going through the host stack. A ping from the host while a netmap consumer holds the ring returns “no route to host” unless the consumer chooses to forward host-stack packets. That is the trap for people migrating from XDP, where the kernel still owns the default path.

There is one bridge project worth naming: vale-bpf, also by Hayakawa, which lets you attach eBPF programs to VALE switch ports as classifiers, with generic-ebpf as the runtime. Benchmarks in the project README claim ~2% better throughput than Linux’s XDP_REDIRECT_MAP and ~9% worse than raw VALE. Same caveats as its parent project: last active circa 2020; shippability in 2026 is an open question.

DTrace: the observability axis

The observability half of the eBPF story — bpftrace, bcc, Brendan Gregg’s flame-graph canon — has a FreeBSD counterpart that predates it: DTrace, ported from Solaris, in-tree since 7.1. For agent-sandbox observability work the two overlap heavily. It is worth being specific about where they don’t.

Where DTrace is stronger or equal:

Where bpftrace is stronger:

For Coppice: DTrace is the right tool for “why is bhyve spending so much time in vm_fault” and for “instrument the pf anchor reload path.” We would miss bpftrace maybe twice a quarter — the moments where someone wants a one-liner flamegraph of a running production process. That’s a tolerable gap. (The AsiaBSDCon 2024 overhead paper has head-to-head numbers; we have not independently re-run them.)
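For concreteness, the vm_fault question above as a one-liner. A hedged sketch using the fbt provider, assuming a stock FreeBSD kernel with the DTrace modules loaded:

```shell
# Aggregate kernel stacks through vm_fault while bhyve is on-CPU.
# fbt::vm_fault:entry fires on kernel-function entry; stack() keys the
# aggregation by kernel call stack (ustack() would add the userland side).
dtrace -n 'fbt::vm_fault:entry /execname == "bhyve"/ { @[stack()] = count(); }'
```

Ctrl-C prints the aggregation sorted by count; that is the DTrace analogue of the bpftrace one-liners the gap analysis is about.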

The networking-policy angle — the one that pays the bills

This page exists because the Coppice dataplane runs Linux eBPF today and we have to answer: does the FreeBSD stack achieve policy-update parity? The full per-program translation lives in /appendix/ebpf-to-pf. Here we set the axes.

Coppice’s CubeNet stack uses three eBPF programs (nodenic, mvmtap, localgw) and a userspace agent that mutates BPF maps to change allow/deny lists, SNAT port allocations, host-port mappings. The stress case is thousands of map mutations per second with no dataplane hiccup.

The FreeBSD answer is a composite:

The honest verdict on policy-update parity:

So the bottom line for Coppice, carried forward from /appendix/ebpf-to-pf: yes at paper-parity for the policy shapes we actually use; no at the arbitrary-5-tuple-lookup-at-wire-speed shape we might want later.
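The policy-update primitive behind that “yes” is pf’s whole-table swap, driven from the userspace agent. A hedged sketch — the table name, interface, and addresses are invented for illustration, not Coppice’s real config:

```shell
# pf.conf side: a persistent table referenced by the ruleset, e.g.
#   table <cube_allow> persist
#   pass in on ix0 proto tcp from <cube_allow> to any
#
# Agent side: swap the entire allow-list in one pfctl call. pf computes
# the delta kernel-side; readers never observe a half-applied table.
pfctl -t cube_allow -T replace 10.0.0.5 10.0.0.9 10.0.1.0/24

# Incremental mutations exist too, for the single-address case:
pfctl -t cube_allow -T add 10.0.2.7
pfctl -t cube_allow -T delete 10.0.0.9
```

The replace form is what the atomic-table-replace measurements exercise: one command, one consistent ruleset state before and after.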

Where the gap actually bites

Being honest about the things Linux eBPF does that FreeBSD doesn’t, in decreasing order of how much it hurts Coppice:

The pattern: eBPF is strongest at “kernel programmability as a platform feature.” FreeBSD trades that for “ship what you need in userspace with a well-designed bypass.” For Coppice’s shape — many microVMs, per-VM policy, policy mutations in the low thousands per second — the netmap-or-pf combination is enough. The measurements in /appendix/ebpf-to-pf landed where we expected: 7 µs p50 intra-sandbox RTT, 14.6 Gbit/s TCP, 250k policy-update ops/sec via atomic table replace. Parity, not promise.
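The atomic-table-replace number deserves a shape. In a netmap-style userspace dataplane the “map” is a plain C structure, and the standard mutation discipline is to build a full replacement off to the side and publish it with a single atomic pointer store. A minimal portable sketch of that reader/writer pattern — the policy struct is invented for illustration, and a real dataplane would add an RCU-style grace period before freeing the old table:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Toy policy table: a flat allow-list of IPv4 addresses. A real one
 * would be a hash table; the publication discipline is the same. */
struct policy {
    size_t   n;
    uint32_t allow[64];
};

static _Atomic(struct policy *) current_policy;

/* Fast path (per packet): one acquire load, then read-only access. */
static bool allowed(uint32_t addr)
{
    const struct policy *p =
        atomic_load_explicit(&current_policy, memory_order_acquire);
    for (size_t i = 0; i < p->n; i++)
        if (p->allow[i] == addr)
            return true;
    return false;
}

/* Control path: build a complete replacement, publish in one store.
 * Readers see the old table or the new one, never a mix. Returns the
 * old table, which the caller must defer-free past any in-flight reads. */
static struct policy *replace_policy(const uint32_t *addrs, size_t n)
{
    struct policy *np = malloc(sizeof *np);
    np->n = n;
    memcpy(np->allow, addrs, n * sizeof *addrs);
    return atomic_exchange_explicit(&current_policy, np,
                                    memory_order_acq_rel);
}
```

No syscall, no verifier, no kernel round-trip: the cost of a policy update is one malloc, one memcpy, and one atomic exchange, which is why the update rate is bounded by memory bandwidth rather than by a kernel interface.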

References