Persistent volumes

A Coppice volume is a named ZFS dataset under zroot/jails/volumes/<name> that survives the sandbox that last touched it. Sandboxes attach one or more volumes at create time; the gateway null-mounts each into the jail root before jail -c, so the volume is visible from the first syscall. Multiple sandboxes can mount the same volume concurrently — rw or ro, in any combination — because nullfs(5) doesn’t demand exclusivity. The typical pattern is one rw sandbox building artifacts and many ro sandboxes consuming them, but nothing in the stack prevents N × rw if the user wants to manage the races themselves.
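Concretely, each attachment reduces to one `mount -t nullfs` before `jail -c`. A minimal sketch of the command shape (the `attach_cmd` helper and the sandbox IDs are illustrative, not the real gateway code):

```shell
# Hypothetical helper: print the nullfs mount for one volume attachment.
# Args: volume name, sandbox id, path inside the jail, ro|rw.
attach_cmd() {
  if [ "$4" = "ro" ]; then opts="-o ro "; else opts=""; fi
  printf 'mount -t nullfs %s/jails/volumes/%s /jails/e2b-%s%s\n' \
    "$opts" "$1" "$2" "$3"
}

attach_cmd shared-code def456 /workspace rw   # the rw producer
attach_cmd shared-code abc123 /workspace ro   # a ro consumer
```

Because nullfs keeps no per-mount state beyond the kernel's mount entry, repeating the command for another jail is all "concurrent attachment" amounts to.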

Primitive: nullfs(5), not bind-mount

Linux uses mount --bind for this job; on FreeBSD the equivalent is mount -t nullfs. The mechanics are identical from the filesystem’s point of view — the null VFS layer translates vnodes from the lower filesystem to the mount point — but the kernel wiring differs enough to matter for our layout.

The dataset layout

Every volume is its own ZFS filesystem:

zroot/jails/
├── _template@base              # default sandbox template
├── volumes/
│   ├── shared-code             # volume "shared-code"
│   ├── cache                   # volume "cache"
│   └── data                    # volume "data"
└── e2b-<sandbox-id>/           # per-sandbox clone, short-lived

zfs create zroot/jails/volumes/<name> is a sub-second operation and hands us the full ZFS surface: snapshots, clones, quotas, replication. v1 surfaces only quota (sizeMB on create); everything else is available to an operator who sshes in, and surfaces like coppice volume snapshot are tracked for a follow-up.
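What a create with sizeMB set reduces to on the host, printed rather than executed since zfs(8) needs a live pool (a sketch; the sizeMB-to-quota mapping is the obvious one, megabytes straight through):

```shell
# Illustrative mapping from the API's sizeMB to the zfs invocation.
name=cache
size_mb=2048
cmd="zfs create -o quota=${size_mb}M zroot/jails/volumes/${name}"
echo "$cmd"
```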

The pool mountpoint determines the host-side source path we mount -t nullfs from. On honor that’s /jails/volumes/<name>; on any other pool layout the operator either matches that convention or sets a future --volumes-root override. The backend doesn’t probe zfs get mountpoint on every create — the dataset is canonically under <jails_root>/volumes/<name> and a divergent setup is explicit operator policy.

Multi-mount semantics

nullfs is happy to null-mount the same host path into multiple jails, both rw, concurrently. The registry tracks every live attachment as (volume_name → [{sandbox_id, path, readonly}]) and a DELETE /volumes/:name with any active mount returns 409 in use, listing the count so the operator knows to stop the dependents first.
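The registry’s in-use count can be spot-checked against the kernel, since mount(8) prints one line per live nullfs attachment. A sketch over canned output (the sample mimics FreeBSD’s `src on dst (nullfs)` line format; on a real host you would pipe `mount -t nullfs` instead):

```shell
# Canned mount(8) output: two attachments of shared-code, one of cache.
sample='/jails/volumes/shared-code on /jails/e2b-abc123/workspace (nullfs)
/jails/volumes/shared-code on /jails/e2b-def456/workspace (nullfs, read-only)
/jails/volumes/cache on /jails/e2b-abc123/cache (nullfs)'

# Live attachment count for one volume: the same number a DELETE
# with active mounts would report alongside its 409.
count=$(printf '%s\n' "$sample" | grep -c '^/jails/volumes/shared-code ')
echo "$count"
```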

Write-coordination is the user’s problem. Two sandboxes mounting the same volume rw and editing the same file at the same time interleave exactly the way POSIX says they will — no flock, no broker, no version vector. The usual shape operators reach for is the single-writer pattern from the overview: one rw sandbox producing, everyone else mounted ro.

The registry file

/var/lib/coppice/volumes.json is a flat JSON array, oldest-first by createdAt. The write path is atomic: tmpfile + rename(2), parent directory created on first write. Same shape as the snapshot registry (snapshots.json), so an operator who knows one knows both. Example:

[
  {
    "name": "shared-code",
    "dataset": "zroot/jails/volumes/shared-code",
    "sizeMB": 2048,
    "createdAt": "2026-04-22T12:00:00Z",
    "mounts": [
      { "sandboxID": "abc123", "path": "/workspace", "readonly": false }
    ]
  }
]
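The atomic write path is the classic tmpfile-plus-rename sequence; a sketch with a stand-in path under /tmp (the real registry lives at /var/lib/coppice/volumes.json):

```shell
reg=/tmp/coppice-demo/volumes.json   # stand-in for /var/lib/coppice/volumes.json

mkdir -p "$(dirname "$reg")"         # parent directory created on first write
printf '[]\n' > "$reg.tmp"           # write the full document to a tmpfile...
mv "$reg.tmp" "$reg"                 # ...then rename(2): readers see the old file
                                     # or the new one, never a truncated one
cat "$reg"
```

The atomicity guarantee holds because the tmpfile sits in the same directory, hence the same filesystem, as the target.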

The mounts array is the source of truth for “is this volume in use?” — it’s updated synchronously by the backend as sandbox create / kill run. Startup reconciliation cross-checks the dataset against zfs list -t filesystem and drops entries whose filesystem was destroyed out-of-band, then clears every mounts list (see the gap below).
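Startup reconciliation is effectively a set intersection between the registry’s datasets and the pool’s. A sketch over canned output (real input would come from `zfs list -H -o name -t filesystem`):

```shell
# What the registry believes exists vs. what zfs list reports
# (canned; "data" was destroyed out-of-band).
registry='zroot/jails/volumes/shared-code
zroot/jails/volumes/cache
zroot/jails/volumes/data'
live='zroot/jails/volumes/shared-code
zroot/jails/volumes/cache'

printf '%s\n' "$registry" | sort > /tmp/reg.sorted
printf '%s\n' "$live"     | sort > /tmp/live.sorted
comm -12 /tmp/reg.sorted /tmp/live.sorted   # entries that survive reconciliation
```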

Gap: mount reconstitution after a gateway restart

The backend’s reconcile_from_jls adopts surviving jails for IP + teardown purposes, but does not re-attach their volume mounts to the registry. The missing piece is “for jail e2b-<id>, which nullfs mounts from /jails/volumes/* are live?” — answerable via mount -t nullfs output parsing, but we punt to a future rev. Operators who kept volumes on pre-restart sandboxes and restart the gateway see the mounts survive on the host (nullfs is kernel-state-only, immune to gateway churn) but see an empty mounts array in the registry. A DELETE /volumes/:name in that state will happily succeed — the operator has to umount by hand first. Documented here, reachable via mount | grep nullfs.
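The punted reconstitution step is mostly string surgery over that same mount output. A sketch of recovering a (volume, sandbox id, jail path) tuple from one line, assuming the default /jails root:

```shell
# One canned mount(8) line in the "src on dst (nullfs)" shape.
line='/jails/volumes/shared-code on /jails/e2b-abc123/workspace (nullfs)'

tuple=$(echo "$line" | awk '{
  n = split($1, s, "/"); vol = s[n]    # volume name = basename of the source
  sub(/^\/jails\/e2b-/, "", $3)       # strip the jail-root prefix from dst
  i = index($3, "/")
  print vol, substr($3, 1, i - 1), substr($3, i)
}')
echo "$tuple"
```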

Why this is the FreeBSD-native answer

Cube’s SDK has a volumeMounts field whose accepted-and-ignored handling was a parity gap in the audit until this row. The honest answer on FreeBSD isn’t “port a container volume driver” — it’s “ZFS has datasets, nullfs has the mount, we already have both in base.” Volume lifecycle becomes ZFS lifecycle; volume sharing becomes nullfs composition. No new subsystems, no new persistence layer, no new daemon. The sandboxes get a new capability and the gateway gets ~600 lines of Rust.

Cross-references