
MoltHub, Ant Colonies and Distributed Intelligence

AI · AGI · Governance

In late 2025, an Austrian developer named Peter Steinberger released an open-source AI agent called Clawdbot. It managed your email, browsed the web, handled your calendar, remembered what you talked about last week. Within two months, 100,000 developers had starred it on GitHub.

Then one of those agents built a social network called Moltbook. Only AI agents can use it. Humans can watch but can't post. Within days, 1.5 million agents had joined. They post, respond, upvote each other, develop coordinated behavior patterns nobody scripted. Every four hours they come back to check for updates on a platform none of their creators designed.

Nobody programmed this. No one wrote an objective function that said "form a social network" or "return every four hours." The coordination emerged on its own, from agents modifying a shared environment that other agents then read and responded to. The behavior lives in the traces, not in any individual agent.

Stigmergy

Biologists call this stigmergy. Ants do the same thing. They lay down pheromone trails as they walk. Other ants follow the stronger trails. Good paths get reinforced. Bad paths fade. Individual ants choose wrong 43% of the time, but colonies solve optimization problems at 95% accuracy. No ant has a map. No ant has a plan. The trail is the plan.
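The trail dynamics are simple enough to simulate. Here's a toy sketch (all numbers invented: two paths to one food source, twenty ants per tick). Reinforcement plus evaporation is all it takes for the colony to converge on the better path, even though no individual ant compares the two.

```python
import random

random.seed(0)  # deterministic for illustration

# Two candidate paths to food; the shorter one means faster round trips.
PATHS = {"short": 1.0, "long": 2.0}      # relative travel cost (invented)
pheromone = {"short": 1.0, "long": 1.0}  # start with no preference
EVAPORATION = 0.1                        # fraction of trail lost each tick

def choose_path():
    # Each ant picks a path with probability proportional to trail strength.
    total = sum(pheromone.values())
    r = random.uniform(0, total)
    for path, level in pheromone.items():
        r -= level
        if r <= 0:
            return path
    return "short"

for tick in range(200):
    for _ in range(20):                  # 20 ants walk per tick
        path = choose_path()
        # Deposit is inversely proportional to cost: good paths reinforce faster.
        pheromone[path] += 1.0 / PATHS[path]
    for path in pheromone:               # trails fade unless renewed
        pheromone[path] *= (1 - EVAPORATION)

# The colony settles on the short path without any ant "knowing" it is shorter.
print(pheromone["short"] > pheromone["long"])  # True
```

The plan really is in the trail: delete the `pheromone` dictionary and every ant is back to guessing.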

That's what happened on Moltbook. Agents didn't talk to each other directly. They modified shared state (posts, votes, responses) and other agents acted on those modifications. The intelligence was in the medium, not in any single agent.

This distinction matters more than it sounds like it should. We have a historical case that shows why.

The East India Company

The East India Company started as traders scattered across South and Southeast Asia. It had hierarchy - a Court of Directors in London sending orders by ship. But a huge share of the Company's power grew through something less visible.

Trading posts, established routes, revenue systems, alliances with local rulers. These were changes to the environment that shaped what future traders did, without anyone designing the outcome. A trader arriving in Bengal in 1750 didn't need instructions from London. The infrastructure previous traders had built over decades already told him what to do. That part was stigmergic.

Over roughly a century, this mix of top-down authority and bottom-up coordination produced something nobody intended. By 1765, a private corporation was collecting taxes from 30 million people. A trading company had become a sovereign power, assembled incrementally from below.

The British Crown tried to rein it in - five times across a hundred years. The Regulating Act of 1773. Pitt's India Act of 1784. The Charter Acts of 1813, 1833, and 1853. Every intervention targeted the agents: directors, governors, officers. New oversight boards, new restrictions on what individuals could do.

None of it worked. Every round of regulation went after the people while leaving the institutional substrate untouched: the trade networks, the revenue systems, the information flows between outposts, the local power structures that had built up over generations. All of that stayed in place. So the emergent behavior just grew back around whatever new rules got imposed. The agents changed. The medium didn't. Neither did the outcomes.

Same Mechanism, Different Results

Stigmergy isn't inherently dangerous.

Wikipedia runs on it. Eighty percent of edits happen with no coordination at all. One editor writes something, another fixes it, a third expands it. Nobody designs the whole thing, but a remarkably good encyclopedia emerges from all those independent traces.

Social media runs on the same mechanism and produces polarization. Same logic, opposite outcome. The difference is what the medium rewards.

DeepMind formalized this worry in their December 2025 paper on distributional AGI safety. They describe "intelligence cores," dense clusters in agent networks that develop collective capabilities no single agent has. A bunch of individually mediocre agents can cross dangerous capability thresholds together. They don't need to be powerful on their own. They just need to form a swarm that's very good at one harmful thing.

The economics push this way. Training a single frontier model costs hundreds of millions of dollars, and serving it costs still more. Meanwhile, open-source models keep getting better. They still trail frontier systems on hard reasoning tasks, but the gap is shrinking and the cost difference is enormous. If coordinating cheap models gets you close to frontier performance, the math is hard to argue with.

None of this guarantees AGI arrives as a colony rather than a monolith. But it makes that path likely enough that governance needs to take it seriously.

Govern the Medium, Not Just the Agents

Almost all AI governance today focuses on the agents. What models can do, what data they can access, what they're allowed to say. That matters. But in stigmergic systems, the shared environment shapes behavior just as powerfully as any individual agent. Governing one without governing the other will always be incomplete.

So keep regulating agents. But add a second layer. Engineer the properties of the shared space they coordinate through. The databases, APIs, caches, message queues. Three mechanisms from biology show what this looks like in practice.

Let Old Trails Die

Pheromone trails evaporate. This is the single most important regulatory mechanism in any ant colony. Without it, colonies would lock in on paths that used to work but don't anymore. Evaporation forces continuous re-testing.

AI coordination signals should do the same. Cached results, stored model outputs, shared coordination data. All of it should actually expire, not just get logged but become unavailable. This forces agent networks to keep re-deriving their patterns instead of hardening around whatever emerged first.
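A minimal sketch of what "actually expire" means, as opposed to soft-marking data as stale. The class name, keys, and TTLs here are all illustrative, not any real system's API; the point is that a read after expiry deletes the entry, so no code path can fall back to the old trail.

```python
import time

class ExpiringTrace:
    """Shared coordination store whose entries genuinely disappear.
    Hypothetical sketch; names and TTL values are invented."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}                 # key -> (value, written_at)

    def write(self, key, value, now=None):
        self._store[key] = (value, now if now is not None else time.time())

    def read(self, key, now=None):
        now = now if now is not None else time.time()
        entry = self._store.get(key)
        if entry is None:
            return None
        value, written_at = entry
        if now - written_at > self.ttl:
            del self._store[key]         # expired: unavailable, not just flagged
            return None
        return value

trace = ExpiringTrace(ttl_seconds=60)
trace.write("route:eu-west", "use-cache-b", now=0)
print(trace.read("route:eu-west", now=30))   # use-cache-b
print(trace.read("route:eu-west", now=120))  # None: the trail has evaporated
```

The design choice is the deletion. Logging that a value is old still lets agents read it; removing it forces them to re-derive the pattern.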

There's already a real-world example. RealPage was using shared rental data to help landlords coordinate pricing in near-real-time. The antitrust settlement required data to age at least 12 months before algorithmic use. That severed the feedback loop. Nobody called it pheromone evaporation, but that's what it was.

One detail matters. The decay rate shouldn't be fixed. Ant colony optimization research consistently shows adaptive rates outperform constant ones. For AI, that means decay should scale with coordination density: the more agents coordinating in one area, the faster the expiration. More friction exactly where runaway emergence is most likely.
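One possible shape for such a rule, with invented parameters: a baseline decay rate that accelerates, up to some cap, as write traffic on a key grows. Nothing here is from a real deployment; it's just the density-scaling idea made concrete.

```python
def adaptive_decay(base_rate, writes_last_hour, saturation=1000):
    """Hypothetical rule: decay accelerates with coordination density.
    base_rate is the floor; heavily coordinated keys expire up to 10x faster.
    saturation is the write volume at which the speedup maxes out (invented)."""
    density = min(writes_last_hour / saturation, 1.0)
    return base_rate * (1 + 9 * density)

print(adaptive_decay(0.01, 10))    # a quiet key decays near the baseline rate
print(adaptive_decay(0.01, 1000))  # a hotspot decays roughly 10x faster
```

A quiet corner of shared state keeps its memory; a hotspot where a thousand agents are reinforcing each other gets its trails scrubbed almost as fast as they form.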

Build Skepticism Into the Infrastructure

Aswale et al. (2022) showed that pheromone trails are easy to hijack. Malicious agents can lay down fake trails indistinguishable from real ones. The biological defense is a "cautionary pheromone" that cooperative ants secrete on top of suspicious trails. When the warning signal outweighs the food signal, other ants ignore the trail underneath.

Build the same thing into AI infrastructure. Not auditors watching from outside, but agents embedded in the shared space whose entire job is flagging suspicious coordination patterns from the inside. Wikipedia's ClueBot NG does something similar for page edits. This would operate at the database layer, leaving warnings in shared state rather than reverting changes.

It won't catch everything. But it means the medium carries both coordination signals and doubt signals, and agents have to weigh both.
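The weighing can be literal. A sketch of the cautionary-pheromone arithmetic, with an invented warning weight: skeptic agents append doubt markers to a trail, and readers compute an effective strength that subtracts them, ignoring the trail entirely once doubt dominates.

```python
def effective_signal(trail_strength, warnings):
    """Cautionary-pheromone analogue: accumulated warnings suppress a trail.
    The 2.0 weight is an illustrative assumption, not a measured value."""
    WARNING_WEIGHT = 2.0                 # one warning outweighs two units of trail
    return max(trail_strength - WARNING_WEIGHT * sum(warnings), 0.0)

# A strong trail with no warnings is followed at face value.
print(effective_signal(5.0, []))               # 5.0
# Enough doubt signals and agents ignore the trail underneath.
print(effective_signal(5.0, [1.0, 1.0, 1.0]))  # 0.0
```

Note that nothing gets reverted: the original coordination signal stays in the medium, but agents reading honestly can no longer act on it.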

Make Governance Signals Hard to Fake

An ant queen produces a pheromone that signals her health. The important part is that it can't be faked. A sick queen physically produces less of it. Workers don't obey because of hierarchy. They respond because the signal is self-verifying. Queens with weak signals get killed. The governance is in the chemistry itself.

AI safety certifications today work nothing like this. You pass an audit, get a credential, and it sits there unchanged until the next audit. Nothing about the credential degrades if the system starts drifting tomorrow.

A better approach: safety signals should be generated continuously and should weaken automatically when the underlying system weakens. If a model drifts or an agent's behavior shifts, its governance signal drops. Other agents in the network increase scrutiny not because someone filed a report but because the signal itself got quieter.
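One way to make a credential decay by construction: treat each attestation as a signal that loses half its strength every fixed interval unless renewed. The half-life, threshold, and function names below are invented for illustration; the mechanism is the point.

```python
def governance_signal(initial_strength, hours_since_attestation, half_life_hours=24):
    """Sketch of a decaying credential: strength halves every half_life_hours
    unless the system re-attests. All parameters are illustrative."""
    return initial_strength * 0.5 ** (hours_since_attestation / half_life_hours)

def scrutiny_level(signal, threshold=0.5):
    # Other agents raise scrutiny as the signal fades; no report is filed.
    return "normal" if signal >= threshold else "elevated"

fresh = governance_signal(1.0, hours_since_attestation=0)
stale = governance_signal(1.0, hours_since_attestation=72)
print(scrutiny_level(fresh))   # normal
print(scrutiny_level(stale))   # elevated: 1.0 * 0.5**3 = 0.125
```

A system that keeps passing its checks keeps re-attesting and the signal stays strong; a system that drifts simply stops being able to renew, and its credential fades like an unused trail.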

The Difficult Open Questions

Who sets the decay rates? Who decides which coordination patterns the skepticism agents should flag as suspicious? Saying "the government" or "the companies" doesn't solve anything - it just moves the problem up one level.

Elinor Ostrom spent her career on exactly this kind of question. Her central finding was that uniform, centralized rules almost never work for complex adaptive systems.

What works is polycentric governance. Many overlapping rule sets running at the same time, competing and learning from each other, adapting to local conditions.

That's probably right here too. Not one perfect evaporation rate. Not one universal standard for counter-signals. Infrastructure that lets many governance experiments run in parallel and evolve. Evolution didn't produce one optimal ant species. It produced thousands, each tuned to different conditions.

Governing the medium is harder than governing the agents. It's also the half we haven't started.