Concepts
bones gives multi-agent processes two things that branch-based VCS plus an external broker can’t cleanly provide together: a durable, multi-writer code+state surface and a live coordination channel. This page sketches the moving parts.
Substrate
The substrate is the shared layer agents read and write through:
- Fossil (via `libfossil`) — content-addressed DAG, divergence-tolerant, autosync. Stores code, structured task state, and the timeline as the audit trail.
- NATS (via EdgeSync’s leaf daemon) — KV with TTL for file holds, pub/sub for chat, request/reply for direct asks.
Agents don’t pick one or the other; they call into a single `coord.Coord` that fans out to whichever side fits.
Workspace
A workspace is the directory tree rooted at the `.bones/` marker. `bones init` creates one (or rejoins from a descendant). Pre-rename `.agent-infra/` markers are silently migrated on first touch. It carries:
- The leaf PID and log
- The local Fossil repo (or a pointer to a sibling)
- Per-agent checkout directories under `checkouts/<agent-id>/`
Agents inside the workspace share the substrate; agents in different workspaces are independent.
Orchestrator
The orchestrator is the agent that reads a slot-annotated plan and dispatches one subagent per slot. Bones doesn’t prescribe an orchestrator implementation — the shipped Claude Code skill is one possible policy. The CLI surface (`bones up`, `bones validate-plan`) is the substrate primitive; orchestration logic lives in the skill prompt.
Hub and leaf
Bones uses a hub-leaf topology for parallel runs:
- Hub — a single Fossil repo (`hub.fossil`) plus an HTTP xfer endpoint, auto-started on the first verb that needs it (per ADR 0041; explicit control via `bones hub start`). The canonical state.
- Leaf — a per-agent clone with its own worktree, opened via `coord.OpenLeaf`. The leaf’s `leaf.Agent` (from EdgeSync) handles NATS mesh sync; `coord` wraps the claim and task surfaces on top.
Subagents call `coord.OpenLeaf(ctx, LeafConfig{Hub: hub, ...})`; the leaf clones from the hub, starts NATS sync, and the agent commits via the leaf’s worktree. All sync is fire-and-forget — agents don’t push or pull manually.
Coord
`coord/` is the only public Go package. It composes:
- Holds — NATS KV bucket with TTL; `AnnounceHold`, `Release`, `WhoHas`, `Subscribe`. Optimistic, not pessimistic — collisions resolve via chat.
- Tasks — `Claim`, `Update`, `List`, `Watch` over Fossil-tracked task files in `tasks/`. State lives in YAML frontmatter.
- Chat — pub/sub with thread short-IDs; reuses EdgeSync’s notify service.
- Ask — request/reply over NATS for direct subagent ↔ subagent questions.
See the CLI reference for every command and ADR 0001 for why `coord/` is the sole public package.
Tasks
A task is a markdown file with YAML frontmatter under `tasks/`. The frontmatter carries the structured state (`id`, `status`, `claimed_by`, `parent`, `context`); the body is freeform notes. Two agents claiming the same task creates a Fossil fork — first-wins or chat-resolved per ADR 0004.
Plans and slots
A plan is a markdown document with `[slot: name]` annotations. `bones validate-plan` checks the format; `--list-slots` emits a JSON slot→tasks mapping. The orchestrator reads that JSON and dispatches one subagent per slot, each receiving its task list inline plus environment values (`AGENT_ID`, `SLOT_ID`, `HUB_URL`, `NATS_URL`, `WORKDIR`).