FreeBSD has had a BPF since 1990 — Steven McCanne and Van
Jacobson’s original packet filter, the one tcpdump still
opens. That is not the thing people mean in 2026 when they say
“eBPF.” This page is the honest audit: what of the Linux
extended-BPF ecosystem has a FreeBSD counterpart, what doesn’t, and
where the gap is load-bearing for the Coppice workload.
Two things, same three letters
Classic BPF (cBPF) lives in sys/net/bpf.c and is exposed as
/dev/bpf. It’s a register-machine packet classifier (one accumulator, one index register) with a
tiny fixed instruction set, used by tcpdump(1), by
pf’s capture hooks, by dhclient, by
ng_bpf in netgraph. FreeBSD’s bpf(4) man page
describes this, not the Linux thing. It is mature, small, boring, and
in-tree.
Extended BPF (eBPF) is a different machine: 64-bit registers, maps as
first-class objects, a verifier that proves termination and memory
safety at load, helper functions, program types bound to attach points
(XDP, tc, tracepoint, kprobe, cgroup_skb, sched_ext…), JIT compilers
per architecture, and a userspace ecosystem — libbpf,
bpftool, CO-RE, bpftrace, Cilium — that
treats the kernel’s BPF subsystem as a programmable substrate. None
of that is in FreeBSD base in 2026.
So when a Cube engineer asks “does FreeBSD have BPF?”, the
answer is yes in the sense that tcpdump runs and no in the sense
that nothing you write against libbpf will load.
The port attempts, and where they stand
The serious port attempt is Yutaro Hayakawa’s
generic-ebpf,
presented at BSDCan 2018 as eBPF Implementation for FreeBSD. The design decomposes a Linux-style
eBPF runtime into layered components — interpreter, JIT, map subsystem,
an ebpf_dev character device that stands in for Linux’s
bpf(2) syscall — and implements each portably so the same
runtime can live in FreeBSD kernel, Linux kernel, and userspace. Two
shipped kernel modules: ebpf.ko (the runtime) and
ebpf-dev.ko (the loader). A companion
libbpf-freebsd
port targets the same interface.
The unhappy fact, as of this writing, is that generic-ebpf’s
dev branch last saw a commit on 2021-05-28. Five years of
silence on a project whose author has since written more-interesting
things (vale-bpf, Cilium-adjacent work) is, in the unwritten
conventions of our tribe, the signal that a thing has stopped. It
still builds on a FreeBSD 13.x head of the right vintage; it almost
certainly does not build cleanly on 15.0 without work. We have not
verified either direction, and the 2026 status should be treated as
unmaintained, rebuildable with effort, not shippable.
There is no separate, in-tree ebpf.ko in FreeBSD base.
bpf(4) is cBPF. The Linux-compat layer
(linuxkpi, linuxulator) does not emulate
bpf(2). The FreeBSD Foundation’s status reports for
2025-Q1 through 2025-Q3 carry no eBPF work items
(Q3
2025 checked directly). The freebsd-net@ threads we found are the
periodic “is anyone doing this?” ones, with answers that rhyme
with the KSM ones in /appendix/ksm-equivalent
— real interest, no shipped code, GPL concerns about lifting the Linux
implementation wholesale.
The Matt Macy thread is harder to pin down. Macy is the author of the ZFS-on-Linux-to-FreeBSD ZoL consolidation; he has appeared in a few freebsd-net@ conversations about eBPF but we could not locate a corresponding tree or series. Mentioning him in the same sentence as “FreeBSD eBPF port” should be flagged unverified until somebody points to a branch.
XDP? No. What then?
XDP (eXpress Data Path) is a Linux-kernel construct by design: an
eBPF program attached at the driver’s receive path, running
before skb allocation, with verdicts
XDP_PASS/XDP_DROP/XDP_TX/XDP_REDIRECT.
It is cheap because it sidesteps the normal stack. There is no
FreeBSD equivalent, full stop — not in name, not in
“same-shape-different-API.”
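For contrast, the Linux artifact being discussed looks like this. A minimal XDP program, sketched from the usual libbpf conventions (built with clang -O2 -target bpf; none of this compiles or loads on FreeBSD, which is the point of this section):

```c
/* Illustrative sketch only, not built for this page. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int pass_ipv4_drop_rest(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    /* The verifier rejects the program unless every packet access
     * is preceded by a bounds check like this one. */
    if (data + 14 > data_end)
        return XDP_DROP;

    /* Ethertype at offset 12: pass IPv4 up the stack, drop the rest. */
    unsigned char *p = data;
    return (p[12] == 0x08 && p[13] == 0x00) ? XDP_PASS : XDP_DROP;
}

char _license[] SEC("license") = "GPL";
```

The program runs in the driver before skb allocation and returns one of the verdicts listed above; attachment is a one-line ip link operation against a live interface.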
What FreeBSD offers instead is netmap, Luigi Rizzo’s
2012 framework (netmap: A Novel Framework for Fast Packet I/O, USENIX ATC 2012).
netmap maps NIC ring buffers into userspace, eliminates per-packet
allocation and copies, batches syscalls, and delivers 14.88 Mpps on a
single 900 MHz core on the original reference hardware. On modern
10/25/100 GbE NICs with proper driver support
(ixgbe, ixl, ice,
mlx5 among others) it routinely saturates line rate with
tens of cores to spare. FreeBSD has shipped netmap in base since 11.0.
sys/dev/netmap/ + sys/net/netmap*.
VALE (Rizzo & Lettieri, CoNEXT 2012) is the
in-kernel switched-ethernet complement: virtual ports you can attach
bhyve VMs, jails, or userspace programs to, with the same
memory-mapped-ring discipline. man 4 vale.
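The authoring-model difference is easiest to see in code. A minimal netmap consumer against the nm_open(3) wrapper API from net/netmap_user.h; ix0 is a placeholder interface and the block is a sketch, not built for this page:

```c
/* Sketch of the canonical netmap receive loop; not compiled here. */
#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>
#include <poll.h>

int main(void)
{
    /* Grab all hardware rings of ix0. From this point the host stack
     * stops seeing the interface's traffic. */
    struct nm_desc *d = nm_open("netmap:ix0", NULL, 0, NULL);
    if (d == NULL)
        return 1;

    struct pollfd pfd = { .fd = NETMAP_FD(d), .events = POLLIN };
    for (;;) {
        poll(&pfd, 1, -1);      /* one syscall per batch, not per packet */
        struct nm_pkthdr h;
        unsigned char *buf;
        while ((buf = nm_nextpkt(d, &h)) != NULL) {
            /* buf/h.len is the frame, zero-copy, in the mapped ring.
             * Ordinary C from here: classify, rewrite, count, drop. */
        }
    }
    /* not reached */
    nm_close(d);
}
```

Opening "netmap:ix0^" instead attaches the host-stack rings, which is the mechanism a consumer uses to hand packets back to the kernel and keep the host reachable.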
Shape differences that matter for Coppice:
| property | XDP | netmap / VALE | consequence |
|---|---|---|---|
| program model | verified eBPF in kernel | userspace program + kernel fast path | netmap logic is ordinary C, no verifier ceiling; you own the crash if it segfaults. |
| attach point | driver RX, pre-stack | NIC ring-level, stack bypass | Both sit before the normal stack; netmap is more radical — even the host stops seeing the packet unless you hand it back. |
| policy-map mutation | BPF map update_elem, µs | any userspace data structure | netmap gives you a C process; your “map” is whatever you allocate. No kernel round-trip, no verifier. |
| policy-program mutation | reload verified program, ms | rebuild + relink netmap app, seconds | eBPF wins clearly here. XDP program reload is cheaper than rewriting, recompiling, and respawning a netmap consumer. |
| attach surface | ip link set dev eth0 xdp obj prog.o | nm_open("netmap:ix0") from your process | XDP is declarative attach; netmap is a capability you grab and hold. |
| latency through fast path | hundreds of ns per hop | similar; a few µs via VALE port | Loose, published figures. Our measured sandbox-to-sandbox TCP_RR through the regular pf/bridge path (no netmap) lands at 7 µs p50 — see /appendix/ebpf-to-pf for the full table. |
| maturity on FreeBSD | N/A | in-tree since 11, used in production | pfSense/Netgate, Rubicon, iXsystems appliances all ship netmap paths. |
XDP vs. netmap/VALE for the specific shape we care about: a per-interface programmable fast path with userspace-mutable policy state. Neither tool is “better” in the abstract; they optimize different axes of the same problem.
ifconfig ix0 on a netmap-attached interface looks
unremarkable — the interface remains visible, up, with an address —
until you notice that nothing is going through the host stack. A
ping from the host while a netmap consumer holds the ring
returns “no route to host” unless the consumer chooses to
forward host-stack packets. That is the trap for people migrating from
XDP, where the kernel still owns the default path.
There is one bridge project worth naming:
vale-bpf, also
by Hayakawa, lets you attach eBPF programs to VALE switch ports as
classifiers, with generic-ebpf as the runtime. Benchmarks in the
project README claim ~2% better than Linux’s
XDP_REDIRECT_MAP and ~9% worse than raw VALE. Same
caveats as its parent project: last active circa 2020, shippability
in 2026 is an open question.
DTrace: the observability axis
The observability half of the eBPF story — bpftrace,
bcc, Brendan Gregg’s flame-graph canon — has a FreeBSD
counterpart that predates it: DTrace, ported from Solaris, in-tree
since 9.0. For agent-sandbox observability work the two overlap
heavily. It is worth being specific about where they don’t.
- Unified kernel+userspace tracing. DTrace’s USDT probes in a userspace process and its fbt:: probes in the kernel compose in one script. bpftrace’s uprobe/kprobe story is fine but feels stapled; the ergonomics aren’t the same.
- Stable provider model. syscall::, proc::, sched::, io::, tcp::, vminfo:: are documented interfaces. bpftrace’s tracepoints are the equivalent, but the kprobe::/kretprobe:: use for everything else is fishing in unstable kernel symbols.
- Speculative tracing, translators, sizeof(), forced panics. Gregg’s own 2018 bpftrace (DTrace 2.0) for Linux lists these as not-yet-in-bpftrace. Most Coppice debugging doesn’t touch them.
- Stack traces as variables. eBPF can save ustack()/kstack() to a map keyed on anything, aggregate across them, emit at exit. DTrace forces you to print them at sample time.
- In-kernel aggregation at very high event rates. A bpftrace script that counts kmalloc calls by site can do it entirely in-kernel via a hash map. DTrace lifts events through the buffer to userspace more eagerly.
- CO-RE and portability. A bpftrace script written against one kernel runs on another, up to the tracepoint/kprobe stability story. DTrace scripts are generally more portable across kernel versions at the provider level but less so at the fbt:: level.
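Where the overlap is total, the two read almost identically. A pair of equivalent one-liners, counting syscalls by process with the aggregation held in-kernel on both sides (illustrative, not benchmarked here):

```
# DTrace (FreeBSD): stable syscall provider, @-aggregation.
dtrace -n 'syscall:::entry { @[execname] = count(); }'

# bpftrace (Linux): the tracepoint analogue, map-backed aggregation.
bpftrace -e 'tracepoint:raw_syscalls:sys_enter { @[comm] = count(); }'
```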
For Coppice: DTrace is the right tool for “why is bhyve spending
so much time in vm_fault” and for “instrument
the pf anchor reload path.” We would miss bpftrace
maybe twice a quarter — the moments where someone wants a one-liner
flamegraph of a running production process. That’s a tolerable gap.
(The AsiaBSDCon 2024 overhead paper has head-to-head numbers; we have not independently re-run them.)
The networking-policy angle — the one that pays the bills
This page exists because the Coppice dataplane runs Linux eBPF today and we have to answer: does the FreeBSD stack achieve policy-update parity? The full per-program translation lives in /appendix/ebpf-to-pf. Here we set the axes.
Coppice’s CubeNet stack uses three eBPF programs (nodenic,
mvmtap, localgw) and a userspace agent that
mutates BPF maps to change allow/deny lists, SNAT port allocations,
host-port mappings. The stress case is thousands of map
mutations per second with no dataplane hiccup.
The FreeBSD answer is a composite:
- pf + tables for L3/L4 allow-deny and for dynamic set membership. pfctl -t cube_allow_$id -T replace -f - swaps a table atomically. pf tables use a radix trie; membership lookup is a handful of cache lines.
- pf rdr / nat for host-port mapping and SNAT rewrite.
- pf anchors per sandbox for structural partitioning — each sandbox lives in its own anchor, loaded at create, reloaded as a unit when structural rules change.
- dummynet (via ipfw) for rate shaping.
- ng_bridge + ng_ether for L2 fast paths between TAPs, optionally with ng_bpf (classic BPF) as a classifier.
- netmap / VALE as the escape hatch when pf-in-the-hot-path is too expensive.
The honest verdict on policy-update parity:
- Table-membership mutation (the common case). pfctl -t T -T add/delete runs in hundreds of µs to low ms per call. Atomic -T replace of a whole table with N entries scales linearly up to the net.pf.request_maxcount limit (default 65535, commonly raised to 262144 in large deployments; D18909). For the “change this sandbox’s allow-list” shape of mutation, pf tables are within a small constant of eBPF-map updates — enough that sub-1000-mutations/sec workloads will not notice.
- Structural mutation (per-sandbox anchor reload). pfctl -a cube/$id -f - is single to low-tens of ms depending on anchor size. At 1000 sandboxes x 1 structural change per second this is a problem. It’s not the problem eBPF is solving, though — eBPF’s map mutations are the equivalent of pf table mutations, and eBPF program replacement is a verifier round-trip that is not materially cheaper than anchor reload.
- Per-5-tuple policy decision lookup. eBPF hash maps keyed on a complex struct are the clean solution. pf tables are address-keyed; for arbitrary key shapes you hit netgraph (ng_bpf, classic BPF expressiveness only) or netmap (C code, full expressiveness, userspace).
- Tail-call composition. bpf_redirect_peer and bpf_tail_call compose programs in the hot path. netgraph’s nodes compose via message passing; it’s the closest FreeBSD has, with different ergonomics. Cilium-style L7 service-mesh policy on the FreeBSD stack is possible via VPP on netmap, not via pf.
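The mutation surface, as a command transcript; the table name, anchor name, and rules path are hypothetical, while the pfctl flags and the sysctl are the real interfaces:

```
# Membership mutation: atomically swap one sandbox's allow-list from stdin.
printf '%s\n' 203.0.113.7 198.51.100.0/24 | \
    pfctl -t cube_allow_sbx1 -T replace -f -

# Incremental membership changes, one kernel round-trip each.
pfctl -t cube_allow_sbx1 -T add 192.0.2.9
pfctl -t cube_allow_sbx1 -T delete 192.0.2.9

# Structural mutation: reload one sandbox's anchor, not the whole ruleset.
pfctl -a cube/sbx1 -f /var/run/coppice/sbx1.rules

# Raise the per-request table-op ceiling (default 65535).
sysctl net.pf.request_maxcount=262144
```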
So the bottom line for Coppice, carried forward from /appendix/ebpf-to-pf: yes at paper-parity for the policy shapes we actually use; no at the arbitrary-5-tuple-lookup-at-wire-speed shape we might want later.
Where the gap actually bites
Being honest about the things Linux eBPF does that FreeBSD doesn’t, in decreasing order of how much it hurts Coppice:
- Cilium-style L7 service mesh with in-kernel policy. The FreeBSD workaround is a userspace proxy sidecar (the Envoy-in-a- jail pattern, or netmap-based VPP). This matters for general-purpose container platforms; for agent sandboxes where egress is a small known set of endpoints it matters little. Measured substitute: Envoy is not packaged on FreeBSD 15.0 (absent from the ports tree; upstream Bazel build doesn’t cleanly produce a FreeBSD binary), so we run haproxy 3.2 as the sidecar — native ACLs express method-deny / path-prefix-deny / allow-all-else; no Lua needed. Measured on sbx-a: 10 ms sidecar startup, ~80 µs per-request overhead, 12 ms graceful policy reload. Config + rig: parity-gaps § L7 policy. If and when a working Envoy port lands, the semantics port unchanged — it’s a substrate swap.
- Runtime security observability (Falco, Tetragon). eBPF-based syscall monitoring for container runtime security. FreeBSD’s answer is DTrace + audit + Capsicum. For an agent-sandbox workload that already constrains syscalls via Capsicum and jail rules, the Falco niche is smaller than it is for general k8s. Real gap, smaller bite.
- Ad-hoc bpftrace sprees. Production debugging where someone writes a one-line script against the live kernel. DTrace fills 90% of this; the other 10% is lazy-instrumentation convenience. Low bite for us — our debugging culture is already DTrace-native on the FreeBSD side.
- sched_ext. Pluggable CPU scheduler written in eBPF. No FreeBSD equivalent; ULE is ULE. Not a Coppice concern.
- XDP-based DDoS scrubbing. Cloudflare’s
bpffilter-style use. FreeBSD’s answer at this scale is netmap-based scrubbers (already deployed in production by several commercial appliances). Same capability, different authoring model.
The pattern: eBPF is strongest at “kernel programmability as a platform feature.” FreeBSD trades that for “ship what you need in userspace with a well-designed bypass.” For Coppice’s shape — many microVMs, per-VM policy, policy mutations in the low thousands per second — the netmap-or-pf combination is enough. The measurements in /appendix/ebpf-to-pf landed where we expected: 7 µs p50 intra-sandbox RTT, 14.6 Gbit/s TCP, 250k policy-update ops/sec via atomic table replace. Parity, not promise.
References
- Yutaro Hayakawa, eBPF Implementation for FreeBSD, BSDCan 2018 — the defining statement of scope for a FreeBSD eBPF port.
- generic-ebpf on GitHub — runtime, last meaningful commit 2021-05-28. Unmaintained as of 2026; verify build before relying.
- libbpf-freebsd — experimental libbpf port, same maintenance status as parent.
- vale-bpf — eBPF-programmable VALE ports; dormant, numbers in the README are unverified by us.
- Luigi Rizzo, netmap: A Novel Framework for Fast Packet I/O, USENIX ATC 2012 — still the reference.
- netmap on GitHub — mirror of in-tree FreeBSD code plus Linux patches.
- Rizzo & Lettieri, VALE: a switched ethernet for virtual machines, ACM CoNEXT 2012.
- Brendan Gregg, bpftrace (DTrace 2.0) for Linux, 2018 — the authoritative DTrace-vs-bpftrace capability list.
- Benchmarking Performance Overhead of DTrace on FreeBSD and eBPF on Linux, AsiaBSDCon 2024 — we cite the abstract; full numbers not independently reproduced here.
- Klara Systems, Inside FreeBSD Netgraph — the good current reference on netgraph’s model and its pfil/ip_fastforward neighborhoods.
- pfctl(8), D18909 (user-facing net.pf.request_maxcount hint) — the pf-tables mutation ceiling mechanic.
- FreeBSD Status Report Q3 2025 — no eBPF entries; verified by direct read.
- Unverified: Matt Macy and eBPF — referenced in freebsd-net@ folklore, no tree or series we could find. Flagged for a later pass.
- Unverified: independent FreeBSD 15-era build of generic-ebpf. Build attempt not run for this page.