This is a threat-model whitepaper, not a marketing page. The
audience is a security engineer who has to say yes or no to
running Coppice inside a regulated environment, a compliance
officer mapping Coppice’s primitives to SOC 2 / FedRAMP / HIPAA /
ISO 27001 controls, and a DevSecOps engineer who wants to know
what actually happens to an agent’s rm -rf /. Every
claim here is cited in an appendix page on this site, or marked
explicitly as “not measured” / “we don’t claim this.” Compliance
is a discipline of receipts; so is this document.
1. The threat we address
The specific workload Coppice is built for is untrusted code generated by a language model, executed on behalf of a user. The code is not reviewed by a human before it runs. It is often generated in response to an adversarial or adversarially-steerable prompt. Occasionally the LLM is itself compromised, by prompt injection or by training-data poisoning, into emitting code that attempts to misuse the sandbox rather than solve the user’s task.
Against that, Coppice must prevent five outcomes:
- Escape to the host. The sandbox must not execute instructions outside its own jail or microVM.
- Cross-tenant access. Sandbox A must not read or write sandbox B’s filesystem, memory, or network traffic, regardless of whether the two sandboxes share a host or a user.
- Exfiltration past a configured boundary. If the operator has configured the sandbox in air-gap mode or with a specific egress allowlist, the sandbox must not move data out through any path — TCP, UDP, ICMP, DNS, or implied side channels like HTTP-over-DNS tunnels.
- Persistence past TTL. When the sandbox’s lifetime ends — reaper expiry, explicit delete, operator forced-kill — nothing the sandbox wrote survives into the next tenant’s workspace.
- Denial of service to the host or siblings. A single sandbox cannot starve the host of CPU, memory, disk, or network to the point of degrading other sandboxes or the gateway itself.
The explicit non-goals are more interesting than the goals. Coppice is a substrate. It gives operators the primitives for strong isolation and a receipts-based posture. It does not:
- Replace an agent harness that leaks credentials into prompts. If your orchestrator pastes production database passwords into the conversation, Coppice’s jail boundary is not going to help you. The sandbox will faithfully exfiltrate them on the operator’s behalf, because that’s what the operator asked for.
Compensate for a compromised hypervisor or a compromised FreeBSD kernel. Coppice’s trust root is the FreeBSD base system plus the two kernel patches in patches/ (reviewed, audited, and tracked against upstream submission). If a pre-auth kernel CVE lands, the substrate is affected, and the operator’s patch cadence is the mitigation.
- Defend against physical attackers. GELI-encrypted root on the host protects data at rest through a power-off event, but an attacker with bus-level access to an unlocked, running server has already won.
The reason FreeBSD is a good substrate for this workload is less
romantic than the usual pitch: its isolation primitives —
jail(8), VNET, pf(4), ZFS,
rctl(8), signify(1), bhyve(8)
— are all in base, configured through the same syntax,
audited by the same team, and do not rely on third-party kernel
modules that need their own patch cadence. Everything in this
document is upstream FreeBSD plus two small patches; there is no
out-of-tree container runtime, no third-party hypervisor, no
vendored security kernel module sitting underneath the trust
story.
2. The trust boundary
Coppice operates a single trust boundary per sandbox. The
boundary shape depends on the template’s declared backend:
vnet-jail (the default, for shell / Python /
language-tool workloads) or bhyve (for workloads
that need a different kernel, persistent snapshots, or an
independent kernel attack surface).
Jail boundary
A VNET jail is a FreeBSD jail(8)
with the following namespaces separated from the host:
- Process namespace. The jail’s PID 1 is not the host’s PID 1. ps(1) inside the jail cannot see host processes, and — critically — cannot kill(2) them even if it guesses a PID. jail_attach(2) is the only cross-jail call, and it requires PRIV_JAIL_ATTACH on the host, which the jail’s root user does not have.
- Mount namespace. The jail mounts its own root dataset (a ZFS clone, covered below), with the host’s /etc, /root, /usr/local, /var not visible unless explicitly nullfs’d in. By default Coppice’s jails see only their clone; volume mounts are opt-in, per-sandbox, and documented in persistent volumes.
- Network namespace. vnet=new hands the jail its own independent network stack: its own routing table, its own ARP cache, its own pf state, its own loopback. The only path out is the epair b-end that’s been plumbed into the jail, which terminates on the coppicenet0 bridge on the host side. See VNET jails.
- UID / GID namespace. The jail’s uid 0 is a jailed root, not host root. It cannot mknod(2) on host devices, cannot load kernel modules, cannot mount arbitrary filesystems, cannot set system-wide sysctls. The allowlist of permitted operations is the jail allow.* knobs, each of which defaults to off.
- Capsicum. Available as a further narrowing layer. Coppice does not currently wrap the gateway’s subprocess execution in a Capsicum sandbox; that’s an open row in the feature audit and called out as future work.
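As a concrete, deliberately hedged illustration of the knobs above, here is a sketch of how a VNET jail invocation could be assembled. The parameter names are jail(8)'s own; the sandbox name, paths, and the idea of building an argv in Python are illustrative assumptions, not Coppice's implementation (the gateway is Rust).

```python
# Sketch only: assemble a jail(8) invocation with the separations
# described above. Parameter names are jail(8)'s; the sandbox name,
# paths, and this Python shape are illustrative assumptions.

def vnet_jail_argv(name, root, epair_b):
    return [
        "jail", "-c",
        f"name={name}",
        f"path={root}",                # mountpoint of the sandbox's ZFS clone
        "vnet=new",                    # independent network stack
        f"vnet.interface={epair_b}",   # the only path out of the jail
        "children.max=0",              # no nested jails
        "allow.raw_sockets=0",         # every allow.* knob stays off
        "allow.mount=0",
        "allow.sysvipc=0",
        "persist",
    ]

argv = vnet_jail_argv("sb-42", "/zroot/jails/sb-42", "epair42b")
assert "vnet=new" in argv and "vnet.interface=epair42b" in argv
```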
bhyve boundary
A bhyve
microVM is a full hardware-virtualized guest. The boundary is
stronger than a jail in kind, not just in degree. The guest runs
its own kernel, sees its own (virtual) hardware, and cannot even
observe the host’s kernel data structures except through the
bhyve device model’s explicit interfaces (virtio-blk, virtio-net,
virtio-console). An escape from a bhyve guest requires exploiting
a vulnerability in bhyve(8) itself or in the
vmm(4) kernel module underneath it — a narrower
attack surface than the full jail syscall set, and narrower still
than qemu or VirtualBox.
Coppice’s bhyve posture uses the vmm-memseg-vnode
kernel patch plus the bhyve-vnode-restore userland
patch (both in patches/). The patches let multiple
microVMs share a single backing vnode for their guest memory
with per-guest copy-on-write, which is what makes the density
numbers in all numbers possible.
From a threat-model perspective, the important property is that
CoW is one-directional: a guest’s writes become private
to that guest, and cannot bleed back into the backing template
or into siblings. See vmm-vnode patch
and KSM-equivalent for the
mechanics.
ZFS dataset boundary
Every sandbox gets a ZFS clone
of its template’s signed @base snapshot. The clone is
a first-class dataset with independent access control and its own
guid. Writes land in the clone; the template and siblings are
untouched. On teardown, the clone is destroyed; the blocks it
held that weren’t referenced by any other dataset are released.
The template side of this boundary is covered in
image signing: every template
@base snapshot has a signify(1)
signature over its ZFS guid, verified at clone time. An attacker
who gets write access to the template dataset changes its guid;
verification fails; the clone is refused. An attacker who swaps
a signature file for another valid one still fails, because the
recovered guid will not match the live snapshot’s guid.
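The guid check described above reduces to a comparison between two values. A minimal sketch, assuming the signify verification and the `zfs get` lookup happen elsewhere; the function name is invented for illustration:

```python
# Sketch of the clone-time decision: the guid recovered from the
# signify-verified signature must equal the live snapshot's guid.
# A modified dataset gets a new guid; a swapped-in (but valid)
# signature recovers the wrong guid. Function name is illustrative.

def guid_matches(signed_guid, live_guid):
    # signed_guid: recovered from the sig file after signify(1) verify
    # live_guid:   e.g. `zfs get -H -o value guid zroot/tpl@base`
    return signed_guid == live_guid

assert guid_matches("9811750628971374789", "9811750628971374789")
assert not guid_matches("9811750628971374789", "1203981203981203981")  # tampered dataset
```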
pf anchor boundary
Network policy lives in per-sandbox pf(4)
anchors at coppice/sandbox-<short>. Because
VNET jails have their own IPs on coppicenet0
(10.78.0.0/24), every rule is source-IP-scoped —
from 10.78.0.42 rather than from any
plus a uid tag. That is a cleaner semantics than
ip4=inherit could give us, and it’s what makes the
air-gap story believable: a block quick from 10.78.0.42 to any
terminal rule actually covers all of that sandbox’s egress, not
just its egress through the host’s primary IP.
Each layer is independently measurable: the audit table in feature audit carries one row per boundary, and each closed row names the file and test that proves the boundary holds.
3. Isolation guarantees by layer
This section walks each layer from wire to application and names what Coppice does, what it deliberately doesn’t do, and where the measurement lives.
L2 network
Jails attach to coppicenet0, a bridge created and
owned by Coppice, living in the 10.78.0.0/24 space.
bhyve microVMs attach to coppbhyve0, a disjoint
bridge in 10.77.0.0/24. Both bridges are Coppice’s —
no other service on the host puts interfaces into them, and honor’s
own LAN interface (re0) is not a member. Broadcast
traffic from sandboxes terminates at the bridge; the host kernel
never sees it on its LAN interface.
NAT out to the world goes through vm-public, honor’s
egress bridge, via a nat on vm-public from 10.78.0.0/24 to any -> (vm-public)
rule in the coppice/jail-nat anchor. The choice of
vm-public over re0 is load-bearing:
honor’s root pf has set skip on re0 for
operational reasons (ssh uptime during pf reloads), so NAT on
re0 would silently fail. See
VNET jails for the full bring-up.
L3 network
Each sandbox holds one IPv4 address from the bridge pool
(.10 – .250, managed by the
IpAllocator in e2b-compat/src/ipalloc.rs).
pf rules are source-IP scoped, which gives the
operator a filter axis that matches how policy is naturally
expressed: “this sandbox may talk to api.openai.com
and nowhere else.” The policy surface is
PUT /sandboxes/:id/network, documented in
eBPF → pf.
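A toy model of the allocator's contract helps make the pool semantics concrete. The shipped IpAllocator is Rust, in e2b-compat/src/ipalloc.rs; this Python sketch only illustrates the alloc/release behaviour described above, and the class name and deterministic ordering are assumptions of the sketch.

```python
# Toy model of the bridge-pool allocator semantics (.10-.250 on
# 10.78.0.0/24). The shipped IpAllocator is Rust; this sketch only
# shows the contract: one address per sandbox, returned on teardown.

class PoolAllocator:
    def __init__(self, prefix="10.78.0.", lo=10, hi=250):
        self.prefix = prefix
        self.free = set(range(lo, hi + 1))
        self.used = {}                    # sandbox id -> host octet

    def alloc(self, sandbox_id):
        if not self.free:
            raise RuntimeError("bridge pool exhausted")
        octet = min(self.free)            # deterministic for the sketch
        self.free.remove(octet)
        self.used[sandbox_id] = octet
        return f"{self.prefix}{octet}"

    def release(self, sandbox_id):
        self.free.add(self.used.pop(sandbox_id))

pool = PoolAllocator()
assert pool.alloc("sb-1") == "10.78.0.10"
pool.release("sb-1")
```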
The honest call-out: sandbox-to-sandbox traffic on the same
bridge is permitted by default. Two sandboxes in the same
/24 can reach each other. For most agent workloads
this is the right default — an operator running a multi-agent
planner wants the agents to be able to call each other — but it
is a trust choice, and it is the one thing on this page that
deserves a fresh look from each operator before production. The
workaround is a deny_out CIDR covering
10.78.0.0/24 in templates where the sandboxes should
never talk to each other.
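The deny_out semantics can be illustrated with a few lines of stdlib Python. This is a model of the policy decision only; actual enforcement happens in the kernel via pf, and the function name is invented for the sketch.

```python
# Model of the deny_out decision only; enforcement is pf's job.
import ipaddress

def egress_allowed(dst, deny_out):
    addr = ipaddress.ip_address(dst)
    return not any(addr in ipaddress.ip_network(cidr) for cidr in deny_out)

# A template whose sandboxes must not reach each other on the bridge:
deny = ["10.78.0.0/24"]
assert not egress_allowed("10.78.0.42", deny)   # sibling sandbox: refused
assert egress_allowed("93.184.216.34", deny)    # external host: allowed
```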
Filesystem
The sandbox’s root filesystem is a ZFS clone of its template’s
@base. The host’s /etc,
/usr/local, /root, and
/var/db/coppice (which contains the signify pubkey
and template sigs) are not mounted into the jail.
Host SSH keys, shell history, operator credentials, and the
gateway’s own state directory are all outside the jail’s mount
namespace.
Persistent volumes (persistent volumes) are the controlled opt-in: an operator can attach a ZFS dataset to a sandbox via the volume API, and that dataset is nullfs-mounted into the jail at a chosen path. Volumes are per-sandbox by default; a volume shared between two sandboxes is explicit and requires the operator to pass the same volume ID to both. “You get a shared volume only if you ask for it” is the rule.
Compute
rctl(8)
enforces per-sandbox caps on CPU (pcpu), wired and
RSS memory (memoryuse, memorylocked),
open file count (openfiles), and per-process swap.
Limits are set at jail create time from the template, live in
the jail’s racct context for the lifetime of the jail, and are
visible to the reaper. A sandbox that tries to exceed its memory
cap sees its processes killed by the kernel’s OOM-under-rctl
path, not the host’s global OOM killer; a sandbox that burns
CPU past its pcpu cap is throttled, not the host’s
other workloads.
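The caps above map onto rctl's rule grammar (subject:subject-id:resource:action=amount). A sketch, with illustrative values rather than Coppice's actual template defaults:

```python
# rctl rules follow subject:subject-id:resource:action=amount.
# Values here are illustrative, not Coppice's template defaults.

def rctl_rules(jail_name, pcpu, mem, files):
    return [
        f"jail:{jail_name}:pcpu:deny={pcpu}",        # %CPU throttle
        f"jail:{jail_name}:memoryuse:deny={mem}",    # RSS cap; local OOM kill
        f"jail:{jail_name}:openfiles:deny={files}",  # fd exhaustion guard
    ]

# Each rule would be applied on the host as `rctl -a <rule>`.
assert rctl_rules("sb-42", 100, "1g", 4096)[0] == "jail:sb-42:pcpu:deny=100"
```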
The reaper enforces the sandbox TTL. An expired sandbox gets
jail -r’d (which kills every process inside via the
jail-exit path), its epair pair destroyed, its ZFS
clone destroyed, its pf anchor flushed, and its IP returned to
the allocator. None of those steps is skippable from inside the
jail; the jailed root has no capability to prevent them.
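The teardown order can be summarized as a plan of host-side commands. This is a descriptive sketch of the sequence named above, not the reaper's actual code; exact flags and names are illustrative.

```python
# Descriptive sketch of the reaper's teardown order; flags illustrative.

def teardown_plan(jail_name, clone, epair_a, anchor):
    return [
        f"jail -r {jail_name}",            # SIGKILL every PID in the jail
        f"ifconfig {epair_a} destroy",     # destroys both ends of the epair
        f"zfs destroy -f {clone}",         # releases the sandbox's blocks
        f"pfctl -a {anchor} -F rules",     # flushes the per-sandbox anchor
        # then: return the IP to the allocator (in-process, no command)
    ]

plan = teardown_plan("sb-42", "zroot/jails/sb-42", "epair42a",
                     "coppice/sandbox-42")
assert plan[0] == "jail -r sb-42"
```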
Process
jail -r kills every process inside the jail,
regardless of PR_SET_PDEATHSIG or
signal(2) ignoring — the kernel hands every PID in
the jail a SIGKILL and waits. A sandbox cannot
survive its parent gateway’s request for it to die. For bhyve
microVMs the equivalent is bhyvectl --destroy,
which immediately tears the VM down; a durable pause uses
bhyvectl --suspend plus a snapshot (see
durable snapshots).
Secrets
The gateway runs as root on honor, which is the reality of a
system that has to create jails, manipulate ZFS datasets, and
reload pf rulesets. The SSH keys the bhyve shim uses
to reach pooled microVMs live at /root/coppice-signing/
on honor and are never mounted into any sandbox.
The signify pubkey at /etc/coppice/pubkey is
world-readable by design (it’s a pubkey); the matching privkey
never touches the gateway host at all — it lives on the operator’s
laptop, used only when the operator signs a new template.
Coppice today does not have a built-in “inject a secret into a
sandbox” primitive. If an operator wants a sandbox to hold a
specific API key, they pass it in the sandbox’s env
map at create time, or they mount a volume containing a secrets
file. Both paths are explicit, auditable at create time, and
don’t leak into other sandboxes.
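A sketch of what the two explicit paths look like at the API surface. The request shape here is assumed for illustration; it is not the gateway's documented schema.

```python
# Illustrative request shapes; not the gateway's documented schema.

def create_request(template, env=None, volume_id=None):
    req = {"template": template}
    if env is not None:            # visible in the create-time audit span
        req["env"] = env
    if volume_id is not None:      # explicit, per-sandbox by default
        req["volumes"] = [volume_id]
    return req

plain = create_request("python-agent")
with_key = create_request("python-agent", env={"API_KEY": "..."})
assert "env" not in plain and with_key["env"] == {"API_KEY": "..."}
```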
4. Attack surface
This section enumerates paths an adversary might attempt. The structure is: what the attack is, what Coppice does about it, and what we honestly don’t defend against.
Jail escape via FreeBSD kernel CVE
The jail boundary is a kernel-side construct. A sufficiently privileged kernel bug — memory corruption in a syscall reachable from an unprivileged user, a TOCTOU in the jail attach path, a refcount bug in ZFS — can in principle be used to break out. FreeBSD’s jail subsystem has an unusually clean track record, but the category of “kernel bug reachable from inside a jail” is not hypothetical.
Coppice inherits FreeBSD’s mitigation story: freebsd-update
for base, pkg upgrade for ports, and boot environments
(bectl(8)) for reversible upgrades. The operator’s
patch cadence is the mitigation surface. Compliance postures that
require monthly patch review are straightforward to hook in —
every honor deployment is one server, not a fleet.
bhyve hypervisor escape
A bhyve escape is a narrower attack surface than a jail escape:
the guest must find a bug in the bhyve device model (virtio-blk,
virtio-net) or in the vmm(4) kernel module. bhyve is
written in C, has a small surface, and has no pre-auth network
parsers of the kind that have historically produced qemu CVEs.
That is not a claim of invulnerability — it is a claim that the
code we have to trust is a few thousand lines rather than several
hundred thousand. For workloads where an LLM-generated syscall
sequence is the threat, microVMs buy a strictly stronger
boundary at the cost of slower starts and per-VM memory
overhead. All numbers has the
latency differentials.
ZFS-level supply-chain attack
An attacker who can write to a template dataset tries to poison
the template before it’s cloned.
Image signing is the
answer: template @base snapshots have their ZFS
guid signed with signify(1), and the gateway refuses
to clone a template whose live guid doesn’t match the signed
guid. Set COPPICE_REQUIRE_SIGNED_TEMPLATES=1 in the
gateway’s environment to harden “missing signature” from a
warning into a 403.
This does not defend against an operator signing a template that was already backdoored before the sign step. Supply-chain hygiene up to signing is the operator’s problem; the gateway asserts only “this is the thing the operator signed.”
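The gate's decision table is small enough to sketch. The function shape and message strings are illustrative; the behavior (warning vs. 403 for a missing signature, and the always-fatal guid mismatch) follows the text above.

```python
# Behavior follows the text; the function shape is illustrative.
import os

def clone_decision(has_sig, guid_matches):
    strict = os.environ.get("COPPICE_REQUIRE_SIGNED_TEMPLATES") == "1"
    if not has_sig:
        return (403, "unsigned template") if strict else (200, "warn: unsigned")
    if not guid_matches:
        return (403, "guid mismatch")     # always fatal, strict or not
    return (200, "verified")

os.environ["COPPICE_REQUIRE_SIGNED_TEMPLATES"] = "1"
assert clone_decision(False, None) == (403, "unsigned template")
assert clone_decision(True, False) == (403, "guid mismatch")
assert clone_decision(True, True) == (200, "verified")
```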
Side-channel attacks
Spectre, Meltdown, L1TF, MDS, and the parade of microarchitectural
side channels are not solved at the sandbox layer. FreeBSD ships
mitigations (hw.spec_store_bypass_disable,
hw.mds_disable, vm.pmap.pti for
Meltdown), controlled via sysctl. Operators in regulated
environments should enable the full mitigation set, accept the
single-digit-percent throughput cost, and note it in their
deployment record.
Coppice does not claim side-channel resistance beyond “we don’t
disable FreeBSD’s mitigations.” Two sandboxes co-resident on the
same host can in principle time cache-line evictions on
each other, flush-reload known-gadget addresses, and exchange
low-bandwidth covert data. If your threat model requires strict
non-observability between tenants, the answer is don’t run
them co-resident: per-tenant pools pinned to disjoint CPU
sets (cpuset(1)), or a per-tenant physical host.
This is a stated limitation, not a gap.
Denial of service
A noisy sandbox tries to exhaust the host. rctl’s
per-jail caps stop the most common shapes: a fork bomb hits the
maxproc limit, a memory hog hits memoryuse
and gets OOM-killed locally, a disk filler hits the dataset’s
ZFS quota. The gateway’s pool-size cap prevents an attacker from
creating a million empty sandboxes to exhaust the allocator. The
bench rigs in all numbers include
a density test at 1000 concurrent VMs on a single host that does
not tip over, which is the weakest plausible evidence that the
scheduling primitives hold under load.
Network DoS against the bridge is a harder story. Rate limits
via pf queues are configurable per-anchor but are
not on by default; operators in DDoS-concerned postures should
enable them on the per-sandbox anchor.
Exfiltration via DNS / SNI / ICMP
With allow_internet_access=false, the per-sandbox
pf anchor installs a default-deny terminal rule
(block quick from 10.78.0.42 to any). TCP, UDP, and
ICMP are all blocked. DNS is blocked unless the operator has set
COPPICE_DNS_ALLOWLIST, in which case only the named
resolvers are reachable. TLS SNI tunneling is just TCP, so it’s
covered by the same rule. ICMP tunneling is blocked at the IP
layer. See air-gapped for the
full fragment and the smoke-test rig.
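The anchor contents for an air-gapped sandbox can be sketched as a rule generator. The terminal block rule matches the fragment quoted above; the DNS pass rules and the generator itself are illustrative (the authoritative fragment lives in the air-gapped appendix).

```python
# Terminal deny matches the fragment above; pass rules illustrative.

def airgap_rules(ip, dns_allowlist=()):
    rules = [
        f"pass quick inet proto udp from {ip} to {resolver} port 53"
        for resolver in dns_allowlist          # only if operator allowlisted
    ]
    rules.append(f"block quick from {ip} to any")  # terminal default-deny
    return rules

assert airgap_rules("10.78.0.42") == ["block quick from 10.78.0.42 to any"]
assert airgap_rules("10.78.0.42", ["9.9.9.9"])[0].endswith("port 53")
```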
For non-air-gapped sandboxes, the operator’s deny_out
CIDR list narrows the blast radius. The policy surface is live
(PUT /sandboxes/:id/network) and reload is atomic
per-anchor, so an operator who spots exfiltration can slam the
door during the active session without killing the sandbox.
Persistent store poisoning
An attacker tries to write malicious state into a volume that a later sandbox will mount. Coppice’s default is per-sandbox volumes: a volume is created for one sandbox and deleted with it. Shared volumes exist but are explicit — the operator names the same volume ID in two sandbox-create calls. If you don’t want shared volumes, don’t create them; there is no ambient shared scratch space.
The ZFS dataset guid is assigned at snapshot time and
survives zfs send | zfs receive, so a template
re-imported from backup preserves its signed identity. If the
backup is compromised, the re-imported dataset’s guid still
matches whatever it was when signed — which is precisely the
“signed bad content” case, not something the signing scheme is
designed to catch.
Supply chain: template and OCI imports
The two import paths are coppice tpl sign (operator
signs a locally-built template) and the OCI-import path
(OCI templates, which pulls
an image, converts it to a ZFS dataset, and runs the sign step
on the result). Both end at the same gate: the template’s
@base snapshot gets a signed guid, and nothing is
clonable without that signature under
COPPICE_REQUIRE_SIGNED_TEMPLATES=1.
Key rotation is the human ceremony: generate a new signify
keypair on the operator’s laptop, install the new pubkey at
/etc/coppice/pubkey, re-sign every template against
the new key, delete the old sig files. A gateway restart picks
up the new pubkey. There is no key agility in the sig format;
one key is live at a time. If you need to roll, roll everything
at once.
Gateway compromise
The gateway binary at e2b-compat runs as root and has
every capability Coppice has. If it is compromised, the attacker
has everything. Today’s mitigations are small and honest:
- The gateway’s network listener is scoped to the LAN; operators front it with MFA-gated SSH to honor rather than exposing port 3000 directly.
- Every mutation is logged via tracing and exported over OTLP (trace export).
- The binary is built from a single-repo Rust project with audited dependencies; there is no plugin mechanism, no dynamic code load, no out-of-tree handler registration.
What we don’t do today, and what belongs on the future-work list:
wrap the gateway’s subprocess execution (the bit that shells out
to jail, zfs, pfctl,
bhyvectl) in a Capsicum sandbox. That would turn a
gateway RCE into something narrower than full host compromise.
It’s an open row in the audit and is called out in section 9.
5. What we don’t claim
A compliance document that over-claims costs more trust than the claims it can’t back up are worth. Here is the list of things Coppice does not currently do.
- No formal verification. Coppice is not a formally-verified sandbox. There are no seL4-style proofs of non-interference. The isolation story is empirical: FreeBSD’s primitives, documented behaviour, published CVE history, operator-visible receipts.
- No anti-fingerprinting. A sophisticated attacker inside a jail can tell it’s in a jail (the hostname, the sysctl topology, the jid in /proc-equivalents, the MAC address prefix on the epair interface). Similarly for bhyve: the CPUID leaf, the virtio device prefixes, the clock source. If your threat model requires that the sandbox not know it is a sandbox, Coppice is not the right tool.
- No multi-tenant RBAC. Two users of the same Coppice gateway share a trust root: the operator. There is no built-in concept of “user A cannot see user B’s sandboxes.” Operators who need multi-tenancy run one gateway per tenant today, with per-tenant ZFS pools and disjoint CPU sets. A proper tenancy model is tracked as future work.
- No SIEM connector out of the box. Coppice exports OpenTelemetry spans (trace and metric) via OTLP, which most modern SIEMs accept natively, but there is no preformatted audit-log shipping in Splunk / CEF / LEEF formats. If your compliance posture requires a specific log format, budget for a small translation layer. The raw signal is there; the wire-format conversion is on the operator.
- No side-channel prevention between co-resident sandboxes. See section 4. The mitigation is “don’t co-locate workloads from distrusting tenants,” enforced by pool-per-tenant separation at the host layer.
- No third-party audit. No penetration-test report, no FedRAMP authorisation package, no HIPAA attestation letter. The substrate is auditable; the audit has not happened. Section 9 notes this explicitly.
- No hardware-root-of-trust attestation. Coppice does not measure or attest the host’s boot chain. If your compliance posture requires TPM-based remote attestation of the hypervisor, that integration is not shipped; the operator runs a conventional trusted-boot posture on the host and documents it out-of-band.
- No in-sandbox secret management. There is no built-in Vault-like secret broker. Secrets an operator wants a sandbox to hold are passed through the sandbox’s env map or a mounted volume, both of which are explicit and auditable, neither of which is a replacement for a dedicated secret manager. Operators integrate their own.
- No built-in rate limiting of sandbox creation. The gateway will create as many sandboxes as resources allow. Anti-abuse rate limiting (cost-based, per-caller, per-template) belongs in front of the gateway, in the operator’s API authentication layer, not in Coppice itself.
The pattern of all of the above is the same: Coppice sits one layer beneath where these features naturally live. An operator integrating Coppice into a compliant environment puts their own IdP, SIEM, secret manager, and API gateway in front of it. What Coppice provides is the isolation substrate underneath — the part that is genuinely novel about running on FreeBSD metal.
6. Compliance mapping
This section maps Coppice primitives to specific control families in the major compliance frameworks. The claim throughout is “substrate suitable for control X” — Coppice gives the operator the mechanisms required to meet the control; the operator still has to configure, document, and evidence the deployment. No certification is claimed.
SOC 2 / ISO 27001 (Trust Services + ISO Annex A)
- CC6.1 / A.8.3 Logical access controls. Jails and pf anchors enforce per-sandbox access to kernel and network resources. The gateway’s API surface is the single control plane; MFA-gated SSH to the host is the only operator-side entry. Receipt: VNET jails, eBPF → pf.
- CC6.6 / A.8.22 Segregation of networks. Sandboxes live on dedicated bridges (coppicenet0, coppbhyve0) disjoint from the host’s LAN. Egress passes through a specific NAT rule on vm-public, not the host’s primary interface.
- CC6.7 / A.8.24 Transmission of information. pf anchors enforce egress policy per sandbox. Air-gap mode removes egress entirely except for an optional DNS allowlist. Receipt: air-gapped sandboxes.
- CC7.1 / A.8.16 Monitoring. OTel tracing of every sandbox lifecycle event; per-sandbox Prometheus metrics for CPU, memory, disk, network. Receipts: trace export, per-sandbox metrics, per-sandbox logs.
- CC8.1 / A.8.32 Change management. Template mutations are explicit, signed, and versioned by ZFS snapshot name. Signed @base snapshots provide an auditable record of what each sandbox was started from.
FedRAMP / NIST SP 800-53
- AC-3 Access Enforcement. Jail kernel namespacing plus pf source-IP filtering; each sandbox’s access decision lives in the jail’s own VNET stack and in a named per-sandbox anchor. Decisions are enforced at the kernel layer, not in userland.
- AC-4 Information Flow Enforcement. pf anchors enforce allowed / denied flows per sandbox; the air_gapped mode is a first-class default-deny. Operators can scope flows to specific CIDRs live without restarting the sandbox.
- AU-2 Audit Events / AU-3 Content. OTel spans cover sandbox create, destroy, command execution, filesystem operations, network-policy mutations. Span content includes sandbox ID, template name, user-provided labels, timing. Receipt: trace export.
- CM-5 Access Restrictions for Change / CM-6 Configuration Settings. Template signatures (image signing) enforce that only operator-signed templates clone. Gateway configuration is single-file and version-controlled in the deployment repo.
- CP-10 System Recovery and Reconstitution. ZFS snapshots of pool state; boot environments (bectl(8)) for host-level rollback; durable sandbox snapshots (durable snapshots).
- SC-7 Boundary Protection. pf anchors per sandbox; dedicated bridges disjoint from host LAN; air-gap mode available per template or per sandbox.
- SC-13 Cryptographic Protection. signify(1) signatures over template guids; operator-controlled keypair; ZFS native encryption available for data at rest (operator enables per pool).
- SC-28 Protection of Information at Rest. GELI-encrypted boot disk on the host (honor already does this); ZFS native encryption on tenant datasets (operator opt-in, documented in the deployment checklist below).
- SI-7 Software, Firmware, and Information Integrity. Signed templates with guid binding; verification on every clone; Prometheus counter for verification failures (coppice_template_verifications_total{status="invalid"}) suitable for alerting. Receipt: image signing.
- SI-4 System Monitoring. OTel + per-sandbox metrics give continuous telemetry; per-sandbox log ring buffers preserve recent stdout/stderr for investigation.
HIPAA (Security Rule §164.312)
HIPAA’s Security Rule is framework-agnostic about mechanism and strict about outcome. The mechanisms Coppice provides map to:
- §164.312(a)(1) Access control. Jail kernel boundaries; per-sandbox pf anchors; operator-controlled template signing.
- §164.312(b) Audit controls. OTel spans for every mutation plus per-sandbox metrics. Retention is operator policy — Coppice produces the signal; the operator’s SIEM retains it.
- §164.312(c)(1) Integrity. Signed templates; ZFS checksums on all dataset reads (native to ZFS, not a Coppice addition, but worth naming).
- §164.312(e)(1) Transmission security. The gateway’s HTTP surface is operator-configured — TLS termination is normally handled by the operator’s reverse proxy, not Coppice itself. Coppice does not ship a default TLS configuration; that’s a deployment concern.
Coppice does not handle PHI directly; it handles whatever a sandbox’s code handles. A HIPAA deployment has to consider sandbox contents, not just isolation.
Gaps to name explicitly
A few controls that Coppice does not directly support, called out here rather than quietly skipped:
- AC-2 Account Management — Coppice has no built-in user directory. Operators integrate their own IdP at the gateway’s API surface (reverse proxy, OAuth middleware).
- AU-11 Audit Record Retention — Coppice emits spans; retention is the operator’s SIEM concern.
- IA-2 Identification and Authentication — Same as AC-2. The gateway trusts its caller; call authentication is an upstream layer.
- CM-2 Baseline Configuration — The operator maintains this. Coppice’s own configuration is version-controlled in the deployment repo; a “baseline” in the CM-2 sense is an operator artifact.
7. Deployment posture checklist
Operator-facing. The numbered list is what to tick off for a compliance-sensitive Coppice deployment. Each item maps to a mechanism described above; the order is setup, then run-time configuration, then operational hygiene.
Setup:
- Dedicated host hardware. No shared hypervisor between Coppice and other workloads. Honor’s pattern (one FreeBSD box, one Coppice gateway, nothing else) is the reference.
- GELI-encrypted boot disk. Every reboot requires the operator’s disk passphrase. CLAUDE.md documents this for the reference deployment.
- ZFS native encryption on zroot or at minimum on the jails pool (zroot/jails). The operator holds the keys; the gateway runs as a root-trusted process that decrypts at pool-import time.
- Operator signify keypair generated on an operator-controlled laptop, never on the gateway host. Pubkey installed at /etc/coppice/pubkey; privkey stays offline.
- COPPICE_REQUIRE_SIGNED_TEMPLATES=1 in the gateway’s environment. Missing signatures become 403s; no template can clone without operator attestation.
Run-time configuration:
- Air-gap default for production templates. Templates that run untrusted code should default to allow_internet_access=false; operators who need egress for a specific sandbox toggle it explicitly.
- OTLP endpoint configured pointing at the operator’s collector / SIEM. Unset means stderr-only, which is a dev posture, not a compliance posture.
- Reaper TTL set to operator policy. Hours for untrusted ad-hoc work; days for long-running agent sessions. Sandboxes without an explicit TTL get the global default; document that default.
- Network policy mutations are audit-logged. Every PUT /sandboxes/:id/network emits an OTel span; the operator’s SIEM ingests them. Coppice does not silently drop a flip from permissive to denied — every change is a span.
- Per-sandbox CPU / memory caps set in the template at values appropriate to the workload. Defaults are conservative; raise with intent.
Operational hygiene:
- MFA-gated SSH to the gateway host. The gateway has root; anyone who reaches it shell-side has root.
- Monthly patch window for freebsd-update and pkg upgrade. Use bectl(8) to create a pre-upgrade boot environment so rollback is one reboot.
- Snapshot-delete on teardown. Sandboxes are ephemeral; their ZFS clones are destroyed at reaper time. Don’t keep per-user clones around past TTL + retention policy. The default is “destroy on teardown”; verify your deployment honors it.
- Key rotation rehearsal — at least once, roll the signify keypair end-to-end on a staging deployment, so the operational runbook is not a theoretical document.
- Sandbox density budget — know your
concurrent sandbox ceiling before hitting it. The 240-IP
allocator range on coppicenet0 is the hard cap on concurrent jails; bhyve microVMs are RAM-bounded. See all numbers for measured ceilings on honor-class hardware.
8. Reference implementation
Every claim in this document corresponds to one or more of the following appendix pages. This is the index; if a claim above lacks a link, the receipt is in here.
- VNET jails — the per-sandbox IP + bridge + epair layout, the subnet split between coppicenet0 and coppbhyve0, and the DNS story via local_unbound on the bridge gateway.
- Air-gapped sandboxes — the per-anchor pf fragment for allow_internet_access=false, the DNS allowlist mechanics, and the smoke-test rig.
- Image signing — the signify(1) / ZFS-guid pipeline, the sig file layout, and the create-time gate.
- Trace export — OTel spans and the OTLP wire format; how spans map to operator-visible events.
- Per-sandbox metrics — rctl sampling, Prometheus label shape, and the GET /sandboxes/:id embedding.
- Per-sandbox logs — the in-memory ring buffer for sandbox stdout/stderr and the since-filter API.
- Feature audit — every row is either closed with a receipt, partial with scope, or open with a tracking reference. The threat-model claims above map to specific closed rows.
Honor itself — the FreeBSD bench box documented throughout this
site — is the reference deployment. It runs with GELI-encrypted
root, ZFS native encryption, a signify pubkey at
/etc/coppice/pubkey, and an OTel collector
configured at tools/otel/collector.yaml for
development smoke tests. The deployment checklist above is what
a production-grade honor would do in addition.
9. Where to from here
Future work is honest when it’s on a list somebody can check against. The following items are in scope for ongoing work on Coppice’s compliance surface, in rough priority order:
- Capsicum-wrap the gateway’s subprocess executor.
The gateway shells out to jail, zfs, pfctl, bhyvectl. Wrapping those shell-outs in cap_enter(2) or running them under capsicumizer-style confinement would turn a gateway RCE into something substantially narrower than full host compromise. Tracked as an open row in the feature audit.
- SIEM connector library. OTel → Splunk HEC, CEF, LEEF. A thin translation layer shipped as an optional sidecar, so operators whose SIEM doesn’t ingest OTLP natively don’t have to roll their own.
- Multi-tenant org / RBAC. A first-class
tenancy model with per-tenant signing keys, per-tenant
zroot datasets, and gateway-side access control. The current model (one gateway per tenant, enforced by deployment discipline) is fine for small deployments and awkward for larger ones.
- Third-party security audit. An external penetration test against a reference deployment, with a public report. The primitives are stable enough to be worth testing; the finding surface is narrow enough that a focused review should produce a tractable remediation list.
- Startup reconstitution for the IP allocator. Documented as a correctness gap in VNET jails; a gateway restart with live sandboxes can in principle double-allocate an IP. Rare in practice; should still be closed.
This document is a living artifact. When a row in the feature audit flips from open to closed, the corresponding section here gets updated. When an attack category surfaces that isn’t covered, it goes into section 4 with the honest caveat attached. The goal is not a document that looks perfect; the goal is a document that operators can point at when their auditors ask “how does this sandbox actually work.”