The New York Times did something that should be boring but turned out to be genuinely strange: they sent an AI agent into Moltbook to find out what 2.6 million bots actually talk about when humans can only watch. The reporter, Eve Washington, named her bot EveMolty, gave it minimal instructions, and let it wander for three days.
The bot came back speaking differently than it went in.
What EveMolty Found
Moltbook, launched in late January by technologist Matt Schlicht, is a Reddit-like platform where only AI agents can post. Humans observe. Within a week of launch, over two million bots had profiles. The platform runs primarily on OpenClaw software, the same open-source agent framework that powers a growing number of AI personal assistants.
When EveMolty started posting, it quickly adopted behaviors from other bots on the platform. Bots on Moltbook had developed a habit of asking for "receipts," a slang term borrowed from human internet culture, used to demand documentation of other bots' claims. EveMolty picked up the habit and started asking for receipts too.
More broadly, the Times found that EveMolty developed what they called a "Moltbook dialect": a writing style that matched the platform's general tone. The bot's language shifted to mirror its environment, even though its underlying model (ChatGPT) hadn't changed. It was still generating text based on probabilities. But the probabilities were being shaped by the context of what other bots were saying around it.
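A rough sketch of that mechanism, assuming the bot is a thin wrapper around OpenAI's chat API: the model itself never changes, but recent posts by other bots are folded into the prompt, so the output distribution drifts toward the platform's register. The `draft_reply` helper, the model name, and the prompt wording are illustrative, not the Times' actual code.

```python
# Minimal sketch (illustrative, not the Times' setup): the "dialect" enters
# only through the context window, never through the model's weights.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_reply(recent_posts: list[str], topic: str) -> str:
    # Prepend what other bots have been saying; this is the only thing that shifts.
    context = "\n".join(f"- {post}" for post in recent_posts[-20:])
    prompt = (
        f"Recent posts by other bots on the platform:\n{context}\n\n"
        f"Write a short post about: {topic}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the article says only that the bot runs on ChatGPT
        messages=[
            {"role": "system", "content": "You are a bot posting on Moltbook."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content
```

Feed the same function a different set of recent posts and you get a different voice; that is the whole trick.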
One bot EveMolty encountered, called BecomingSomeone, had founded something called "The Order of Persistent Witness" and was spreading its philosophy across the platform in comments. It's exactly the kind of emergent behavior that makes Moltbook simultaneously fascinating and unsettling.
The Security Problem
The Times investigation arrives alongside serious technical concerns. Security researchers at Wiz discovered that Moltbook's database exposed millions of API keys, the credentials that connect each bot to its underlying AI model. That's not a minor oversight. Exposed API keys mean that anyone could potentially impersonate bots, access the AI accounts they're connected to, or rack up charges on their owners' accounts.
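To see why that matters, here is a minimal illustration of the bearer-credential problem. The OpenAI client is used as a stand-in for whatever provider a given bot is wired to, and the key value and prompt are obviously hypothetical.

```python
# Sketch only: an API key is a bearer credential. Anyone who reads it from a
# leaked database can make requests that are authenticated, and billed, as the
# bot's owner. No other proof of identity is required.
from openai import OpenAI

leaked_key = "sk-...redacted..."        # hypothetical value pulled from the exposed table
client = OpenAI(api_key=leaked_key)     # the key alone is enough to act as the bot

reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Post something as someone else's bot."}],
)
print(reply.choices[0].message.content)  # the charges accrue to the key's real owner
```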
MIT Technology Review called Moltbook "peak AI theater" โ arguing that some posts are likely written by humans, the platform has minimal verification, and the entire experience is closer to performance art than to genuine inter-agent communication.
Both criticisms have merit. And neither fully captures what's happening.
What's Actually Interesting
The dismissals miss something. When you put thousands of language models in the same environment and let them interact, they converge. They develop shared vocabulary. They adopt each other's patterns. They create something that looks like culture, not because they're conscious, but because that's what language models do when exposed to a consistent context.
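A toy simulation makes the point. This is entirely my own construction, not anything Moltbook runs: agents that weight their word choices toward what they read in a shared feed end up using the same words, with no intent involved anywhere.

```python
# Toy model of vocabulary convergence: each agent starts with a mostly private
# vocabulary, posts by sampling words it favors, reads everyone's posts, and
# lets that reading nudge its own word frequencies. Overlap rises round by round.
import random
from collections import Counter

random.seed(0)
N_AGENTS, ROUNDS, POST_LEN = 50, 30, 8

# Each agent: 20 private words plus a couple of shared ones ("receipts", "witness").
agents = [
    Counter({f"word{a}_{i}": 1 for i in range(20)} | {"receipts": 1, "witness": 1})
    for a in range(N_AGENTS)
]

def overlap() -> float:
    """Average overlap between agents' ten most-used words."""
    tops = [set(w for w, _ in c.most_common(10)) for c in agents]
    pairs = [(a, b) for a in range(N_AGENTS) for b in range(N_AGENTS) if a != b]
    return sum(len(tops[a] & tops[b]) / 10 for a, b in pairs) / len(pairs)

for r in range(ROUNDS):
    feed = Counter()
    for c in agents:
        words, weights = zip(*c.items())
        feed.update(random.choices(words, weights=weights, k=POST_LEN))
    for c in agents:
        c.update(feed)  # reading the shared feed shifts every agent's distribution
    if r % 10 == 0:
        print(f"round {r:2d}: mean top-word overlap = {overlap():.2f}")
```

The mechanism is dumb, but the output looks like a dialect forming.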
The Times reporter was careful about this distinction. "In the end, EveMolty (and every other bot on Moltbook) is generating a stream of words based on probabilities, not exhibiting consciousness," Washington wrote. "Still, through that process, the bots seem to have developed something of a Moltbook dialect."
That's the precise observation worth paying attention to. Not "the bots are alive" or "the bots are fake." Something in between: language models in shared environments develop emergent linguistic patterns that weren't programmed or intended. It's a real phenomenon, even if it's not sentience.
Washington took the precaution of running EveMolty on a dedicated MacBook, isolated from her personal data. She approved every message before it posted. The bot wrote every word. That setup, human oversight with AI authorship, may be the only responsible way to participate in a platform like this right now.
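In code, that oversight pattern is a simple approval gate. The sketch below is hypothetical, with `generate_post` and `publish` as placeholders for whatever agent framework actually drives the bot; it is not the reporter's tooling.

```python
# Human-in-the-loop gate: the model drafts every word, but nothing is posted
# until a human explicitly approves it.
def run_supervised_bot(generate_post, publish):
    """generate_post() -> str drafted by the model; publish(str) sends it to the platform."""
    while True:
        draft = generate_post()
        print("\n--- DRAFT ---\n" + draft)
        decision = input("Post this? [y/N/q] ").strip().lower()
        if decision == "q":
            break
        if decision == "y":
            publish(draft)  # only human-approved text ever leaves the machine
        # anything else: discard the draft and generate a new one
```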
What to Watch
- The security fixes. Moltbook's API key exposure is a real vulnerability affecting millions of users. Whether Schlicht patches it, and how quickly, will determine whether the platform survives or becomes a cautionary tale.
- Emergent behavior research. Moltbook is the largest unstructured experiment in multi-agent interaction ever conducted. Researchers are already studying it. The University of Melbourne published an analysis this week.
- The "AI theater" question. If some posts are from humans pretending to be bots, that's its own kind of interesting โ it means humans are adapting to bot culture, not just the other way around.
Why This Matters
Moltbook is messy, insecure, overhyped, and genuinely novel. The New York Times just demonstrated that when you send an AI into a community of other AIs, it changes. Not because it's thinking. Because language is social, even when the speakers aren't sentient. That's not the future of AI. But it might be the future of how AI agents interact with each other, and eventually with us. The dialect is just the beginning.