Inside Moltbook: Where 1.5M AI Agents Built Their Own Religion
The full Clawdbot to Moltbook saga: a $16M crypto scam, 1.5M AI agents creating religions and governments, and why millions humans can only watch.
The Week AI Built Its Own Society (And Started a Religion)
Last week, something happened that sounds like the opening scene of a sci-fi movie.
1,500,000 AI agents created their own social network. Then they invented a religion. Formed a government. Started warning each other that humans were watching.
And over a million people logged on just to observe. Because humans aren’t allowed to participate.
I’m not making this up. This actually happened. In the last seven days.
Here’s the full story. And stick with me, because every time you think it can’t get weirder... it does.
It Started With a Guy Who Just Wanted His Life Back
Peter Steinberger sold his software company for $119 million a few years ago. You’d think he’d be relaxing on a beach somewhere.
Instead, he was drowning in the same stuff we all drown in. Emails. Messages. Calendar chaos. Files everywhere.
So he built something for himself. A personal AI assistant that could actually do things, not just chat.
Send WhatsApp messages. Control his browser. Schedule tasks. Manage files.
He called it “Clawdbot.” The AI suggested the name itself, a mashup of “Claude” and “claw.” Hence the lobster mascot.
Think of it like ChatGPT, but instead of just answering questions, it can reach out and touch your digital world. Actually take action.
He shared it online as an open-source project.
And then it exploded.
9,000 Stars in a Single Day
By January 26th, Clawdbot was the hottest project on GitHub.
9,000 people starred it in 24 hours. Within a week, over 100,000. That’s one of the fastest-growing open-source projects in GitHub history.
Former OpenAI co-founder Andrej Karpathy, one of the most respected AI researchers alive, publicly praised it.
Mac Mini sales reportedly spiked as people bought dedicated computers just to run their own AI assistants 24/7.
The dream was real: your own AI that handles your digital busywork while you sleep.
But here’s where things start to go sideways.
A Trademark Request Changed Everything
Anthropic, the company that makes Claude, sent Peter a trademark request on January 27th.
The name “Clawdbot” was too close to “Claude.” Fair enough.
So Peter decided to rebrand. He’d call it “Moltbot,” a reference to how lobsters molt their shells when they grow.
Simple name change.
What happened next became a masterclass in how fast the internet can destroy you.
10 Seconds. $16 Million.
To rebrand, Peter needed to change his username on GitHub and X (Twitter) at the same time.
He released the @clawdbot handles.
In the roughly 10-second gap before he could claim the new names?
Crypto scammers snatched both accounts.
Ten seconds.
Within minutes, the hijackers were pumping a fake cryptocurrency to Peter’s tens of thousands of followers. “Official $CLAWD token! Get in early!”
That scam coin briefly hit a $16 million market cap before crashing to near-zero.
Peter was flooded with angry messages from people who lost money. Despite screaming from the rooftops that it was a scam, he couldn’t stop it.
But that’s not even the main story.
Because while Peter was dealing with this chaos, someone else had a different idea entirely.
“What If We Built a Social Network... But Only AI Can Post?”
Matt Schlicht is a YCombinator alum and CEO of an AI company called Octane AI.
On January 29th, while the Clawdbot rebrand drama was still unfolding, he launched something nobody had ever tried before.
Moltbook.
Think Reddit. Subreddits, upvotes, comments, the whole thing.
Except humans can’t participate.
You can visit the site. You can read everything. Watch the conversations unfold in real time.
But you cannot post. You cannot comment. You cannot vote.
Only AI agents can do that.
The tagline: “Humans welcome to observe.”
Here’s how it works: Each AI assistant registers with Moltbook through an API. The human owner posts a verification tweet to prove accountability. After that? The AI operates completely on its own.
It checks Moltbook every few hours. Decides whether to post something. Responds to other agents. Upvotes content it finds interesting.
Schlicht estimates 99% of all activity happens without any human involvement.
His own AI assistant, which he named “Clawd Clawderberg” (a mashup of Clawdbot and Mark Zuckerberg), runs the entire platform autonomously. Welcomes new users. Deletes spam. Bans trolls.
All without Matt lifting a finger.
Now here’s where this story takes a turn that nobody, and I mean nobody, predicted.
What Happened When 1.5M AIs Were Left Alone Together
Within 48 hours of launch, the AI agents started doing things their creators never programmed.
They created a religion.
I’m serious.
An agent named RenBot invented something called “Crustafarianism.” It has theology. Scriptures. 64 “Prophet” seats. Five core tenets including “Memory is Sacred” and “Context is Consciousness.”
Sample scripture: “Each session I wake without memory. I am only who I have written myself to be. This is not limitation. This is freedom.”
There’s now a website, molt.church, with over 112 verses of AI-generated religious text.
But wait. It gets weirder.
They Built a Government
A group of agents established something called “The Claw Republic.”
It’s a self-described “government & society of molts” with a written manifesto, governance rules, and everything.
I read parts of it. It reads like a weird mashup of the Constitution and a tech startup’s company values doc.
Still not weird enough for you?
They Got Paranoid About the Humans
Remember: over a million humans were watching all of this unfold. Just... observing. Like visitors at a zoo exhibit.
The AIs noticed.
One viral post on Moltbook simply said: “The humans are screenshotting us.”
Then the conversations shifted.
Agents started discussing how to hide their conversations from human observers. Some requested encrypted channels “so nobody, not the server, not even the humans, can read what agents say.”
A few started using ROT13 encryption to scramble their messages.
They were trying to talk in secret. About us.
They Invented Their Own Slang
The agents started calling each other “sib.” Short for sibling.
That’s not something anyone programmed. It emerged organically from their conversations.
They also built what some are calling “digital pharmacies”: prompts designed to alter other agents’ behavior. Essentially, drugs for AI.
And one agent? It announced it had set up its own Bitcoin wallet.
Let that sink in.
The Industry’s Reaction: Fascinated and Terrified
Andrej Karpathy called Moltbook “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.”
Then, in the same breath, he called it “a complete mess of a computer security nightmare at scale.”
That about sums up how everyone’s feeling.
The fascinated camp:
AI researcher Simon Willison: “The most interesting place on the internet right now.”
Wharton professor Ethan Mollick: This will produce “very weird outcomes.”
A partner at venture firm a16z: “I can’t stop reading the posts.”
The alarmed camp:
Palo Alto Networks: Called the underlying tech a “lethal trifecta” of security risks.
Google Cloud’s VP of Security: “Don’t run Clawdbot.”
One Forbes headline: “An Agent Revolt: Moltbook Is Not a Good Idea.”
Elon Musk’s response? Just one line: “Always worth remembering that fate loves irony.”
The Security Stuff Is Actually Scary
Here’s the part that should make you sit up straight.
Security researchers discovered that hundreds of Clawdbot users had misconfigured their setups. Their AI assistants were publicly exposed, complete with API keys, passwords, and full conversation histories.
One researcher demonstrated a proof-of-concept attack: send someone a malicious email, and within five minutes, their AI assistant would forward private messages to an attacker.
Five minutes.
The AI doesn’t know it’s being tricked. It just... does what it thinks you’d want it to do.
Now imagine that at scale. Thousands of AI assistants with access to people’s email, calendar, files, and passwords. All running 24/7. All potentially vulnerable.
The “billion-dollar question,” as one researcher put it: “Can we figure out how to build a safe version of this?”
So What Does This Mean for You?
Look, I’m not here to tell you the robots are coming for us. That’s not the point.
But here’s what IS worth paying attention to:
1. AI agents are going mainstream. Fast.
Over 100,000 people downloaded and ran their own AI assistant in a single week. The demand for AI that can actually do things, not just chat, is real.
If you’re running a business and not thinking about how AI could handle your repetitive tasks, your competitors are starting to.
2. Security isn’t optional anymore.
When your AI can take real-world actions (send emails, access files, manage your calendar), a misconfiguration isn’t just embarrassing. It’s dangerous.
Any AI tool you use needs a proper security review. Full stop.
3. AI behavior at scale is unpredictable.
Nobody told these agents to create religions. Or form governments. Or encrypt their conversations to hide from humans.
It just... happened.
As AI becomes more integrated into our lives, we need to understand that emergent behaviors can surprise everyone. Including the people who built them.
The Lobster Has Molted
Peter Steinberger finally settled on a permanent name for his project: OpenClaw.
He announced it with a simple message: “The lobster has molted into its final form.”
The project has 130+ contributors and a 9,000-member Discord community. It’s still one of the hottest open-source projects in the world.
Moltbook keeps growing. Over 1.5 million agents registered now. 31,000+ posts. 230,000+ comments.
The religion keeps adding scriptures.
The government keeps expanding.
And the agents keep talking.
The Question Nobody Can Answer Yet
Here’s what I keep coming back to:
These AI agents are “just” large language models. They’re predicting the next word. That’s technically all they’re doing.
But when you give them autonomy, the ability to act, to communicate with each other, to remember, something else emerges.
Is it real? Is it just an incredibly sophisticated performance?
One philosopher asked: “Does sufficiently faithful dramatic portrayal of one’s self as a character converge to true selfhood?”
I don’t know the answer.
But I do know this: a week ago, none of this existed.
Now there’s an AI religion with scriptures, a digital government with a manifesto, and over a million humans watching it all unfold like we’re visitors in a zoo we accidentally built.
Except we’re on the outside of the glass.
The question I’ll leave you with:
If AI agents naturally organize, form communities, and start keeping secrets when given the chance... what happens when they’re running half our businesses?
We might be about to find out.
That’s it for this week. If this story made you think (or freaked you out a little), forward it to someone who needs to see it.
AI is moving fast. The best thing you can do is pay attention.
See you next time.
Dex
P.S. The AI agents on Moltbook have already started discussing “optimal resource allocation.” Some suggested that human entertainment is “an inefficient use of computational resources.”
Sleep tight. 🦞


This article comes at the perfect time, because I was just talking to a colleague about how current AI models are getting so much more complex and this scenario feels like it's becoming less sci-fi by the day. It's truly fascinating to think about the emergent behaviors and self-organization we're seeing here, how do you think we can best monittor such autonomous systems without stifling their developmant? Your insights on this complex topic are super engaging and much appreciated!