4 min read

Loom Forks: What 154 GitHub Forks Tell Us About AI Agent Development

When someone forks your open source project and does nothing with it, that's noise. When someone forks it and adds 61 commits of systematic analysis documenting every operational gotcha in your codebase, that's signal.

I've been tracking forks of Geoffrey Huntley's Loom project—his AI-powered coding agent built in Rust. As of today, there are 154 forks. Only a handful carry actual modifications. And two of them are pure gold.

The Numbers

Here's what we're looking at:

  • 801 stars, up 43 in three days
  • 154 forks, up 10 in three days
  • 148 forks with zero modifications
  • 4 forks with substantive work
  • 2 of those are essentially complete reverse-engineering efforts

Most people fork to bookmark. A few fork to build. And occasionally, someone forks to understand.

The Two Technical Dives

Two independent efforts have now documented Loom's internals so thoroughly that you could rebuild the system from their notes. They took different approaches and answered different questions.

harry-hathorn: "How is it built?"

This fork contains 14 commits and 186KB of architectural documentation. The analysis was systematic:

  • Phase 1: File inventory (86 crates catalogued)
  • Phase 2: Dependency analysis (no circular deps found)
  • Phase 3: Data structures (150+ types documented)
  • Phase 4: Business logic (25+ algorithms mapped)
  • Phase 5: API documentation (50+ REST endpoints)
  • Phase 6: Architecture patterns (8 design patterns identified)
  • Phase 7: Synthesis (full specification documents)

The completion report claims 95%+ coverage of the codebase. The key output: SPECIFICATION.md—a document that, per their claim, "a developer could rebuild Loom from."

They also produced 24 Mermaid diagrams covering the state machine, module dependencies, and data flows.

Skeptomenos: "How does it actually work?"

This fork has 61 commits and takes a completely different approach. Instead of documenting the architecture, they documented the operational reality—the gotchas, the dead code, the things you only learn by trying to use the system.

They used a tool called ralphus-discover with an exhaustion criterion: stop after three consecutive discoveries with zero follow-ups. They stopped at discovery #38.
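The stopping rule is simple enough to sketch. The real ralphus-discover internals aren't public, so the function and names below are invented for illustration: feed in the number of follow-up questions each discovery produced, and stop once three in a row come back empty.

```rust
// Hedged sketch of the exhaustion criterion described above; names are
// hypothetical, not from ralphus-discover itself. Each item is the
// follow-up count of one discovery; stop after three consecutive zeros.
fn run_until_exhausted(follow_up_counts: impl IntoIterator<Item = usize>) -> usize {
    let mut consecutive_empty = 0;
    let mut discoveries = 0;
    for count in follow_up_counts {
        discoveries += 1;
        if count == 0 {
            consecutive_empty += 1;
            if consecutive_empty == 3 {
                break; // exhausted: three empty discoveries in a row
            }
        } else {
            consecutive_empty = 0; // a productive discovery resets the streak
        }
    }
    discoveries
}
```

Under this rule a run like theirs keeps going as long as discoveries spawn new questions, which is why it ran all the way to #38 before terminating.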

The key output, GOTCHAS.md, catalogues 19 non-obvious behaviors that would trip up anyone trying to work with Loom. Highlights:

  • Retry mechanism is dead code: the RetryTimeoutFired event is never triggered, so LLM errors fail immediately
  • Sequential tool execution is intentional: no file locking exists, so parallel execution would corrupt state
  • The agent state machine is passive: callers must implement the full event loop
  • ~300 lines are duplicated between the CLI and ACP paths: a shared AgentRuntime abstraction is missing
  • The Retry-After header is ignored: no intelligent backoff from server hints
  • MUTATING_TOOLS is hardcoded: new mutating tools need a manual list update
This is the kind of knowledge that normally lives only in the heads of maintainers.

Together: 316KB of Documentation

The combination is remarkable:

  • harry-hathorn answered: "How is Loom built?" (architecture, specs, diagrams)
  • Skeptomenos answered: "How does Loom actually work?" (gotchas, dead code, quirks)

Combined, these two forks provide the most comprehensive third-party documentation of any AI agent platform I've seen.

The Feature Forks

Two other forks made actual code changes, both solving the same problem:

denverbaumgartner: eBPF macOS Compatibility

Denver Baumgartner from Semiotic AI (Graph protocol) submitted a single commit making Loom compile on macOS by adding platform guards around the eBPF code:

# Cargo.toml: only pull in the eBPF crate when compiling for Linux
[target.'cfg(target_os = "linux")'.dependencies]
aya = "0.13"

This allows developers on macOS to build Loom in stub mode without the Linux-specific eBPF components.
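The Cargo-side guard above usually pairs with `#[cfg]` guards in the source. A hedged sketch of the pattern (Loom's actual module layout may differ): Linux builds get the real eBPF hooks, every other platform gets a no-op stub so the crate still compiles.

```rust
// Platform-guard pattern: two mutually exclusive versions of the same
// module, selected at compile time. Module and function names here are
// illustrative, not Loom's.
#[cfg(target_os = "linux")]
mod audit {
    pub fn attach() -> Result<(), String> {
        // the real implementation would load eBPF programs here (e.g. via aya)
        Ok(())
    }
}

#[cfg(not(target_os = "linux"))]
mod audit {
    pub fn attach() -> Result<(), String> {
        // stub mode: no kernel hooks available off-Linux
        Ok(())
    }
}
```

Callers invoke `audit::attach()` unconditionally; the build, not the call site, decides which version they get.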

iamngoni: Conditional eBPF Loader

Independent parallel work on the same problem. Two people saw the same friction and solved it the same way.

What's NOT Being Built

Here's what's interesting: no one is extending the deep infrastructure.

  • No one is building on the eBPF audit system (beyond compatibility fixes)
  • No one is extending the WireGuard networking layer
  • No one is exploring SPIFFE identity attestation
  • No one is touching the "kernel as surface area" vision

The community is documenting and porting, not extending. The systems-level work remains Geoff's solo research direction.

The Ralph Ecosystem is Different

While Loom forks are mostly documentation, the Ralph Wiggum technique has spawned an actual ecosystem of implementations:

  • ralph-orchestrator (905 stars) — Hat-based multi-agent orchestration in Rust, now supporting 7 backends
  • opencode-ralph-wiggum (188 stars) — OpenCode adapter, up 116% in 3 days
  • wreckit (49 stars, new) — Full pipeline automation: ideas → research → plan → implement → PR
  • multi-agent-ralph-loop (44 stars) — Memory, repo curator, 29 hooks

The difference: Ralph is a technique you can implement in a bash loop. Loom is infrastructure that requires understanding to extend.
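That "bash loop" claim is barely an exaggeration. Stripped to its core, Ralph is just retry-until-success around an agent invocation; a minimal Rust sketch, with `run_agent` standing in for whatever actually calls the agent:

```rust
// Minimal sketch of the Ralph technique: run the same task repeatedly
// until the agent reports success. `run_agent` is a placeholder for a
// real agent invocation (a CLI call in the bash version).
fn ralph_loop(mut run_agent: impl FnMut() -> bool, max_iters: usize) -> Option<usize> {
    for attempt in 1..=max_iters {
        if run_agent() {
            return Some(attempt); // agent finished the task on this attempt
        }
    }
    None // gave up after max_iters attempts
}
```

Everything the ecosystem projects add—hats, memory, hooks, pipelines—is layered on top of a loop this simple, which is exactly why implementations proliferate.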

What This Tells Us

The fork pattern reveals how ideas spread:

  1. Simple techniques go viral — Ralph Wiggum is a bash loop. Anyone can implement it. So everyone does.
  2. Complex systems get documented — Loom is 86 crates of Rust. People study it but don't extend it.
  3. Deep work stays solo — Kernel-level agent interfaces remain unexplored by the community.

The community builds where the barrier to entry is low. The frontier remains where the barrier is high.

Following Along

If you want to explore the documentation efforts, start with the harry-hathorn and Skeptomenos forks on GitHub. And if you're building on Ralph, ralph-orchestrator is the most active starting point.

The forks tell you what problems people are actually trying to solve. Pay attention to them.


Loom Fork Watch: Multi-Agent Coordination and Local Development

Update: two new high-value forks have surfaced since this was written: bgyss is building multi-agent coordination, and alan-roe is enabling local development without K8s.