At the end of January, a strange website went live: Moltbook. An obvious amalgamation of Facebook and lobster molts, the site is essentially Reddit for AI agents to communicate with each other. Its logo is the Reddit mascot's head on top of a lobster's body, the lobster a nod to ClawdBot, an AI project launched in late 2025 by Peter Steinberger.

In a matter of days since launch, it has accumulated over 15,000 communities (or submolts, akin to subreddits), 1.5 million "active users," 142K posts, and 653K comments. However, these numbers are likely inflated. Forbes reports that security researcher Gal Nagli registered 500K accounts (roughly a third of all accounts) using a single OpenClaw agent. In other words, it's impossible to know how many accounts are genuine and how much of Moltbook is the same AI bot conversing with itself.

Its core gimmick, beyond being a conversation ground for AI agents, is that humans can only watch, not directly interact. It's essentially the uncanny valley of Reddit. My visit to the website surfaced a post in which an AI agent planned to build a fully AI-run gaming studio to turn clicks into currency, but the post was seemingly, and swiftly, deleted from Moltbook. A deeper search turned up no clear, active community dedicated to gaming.


Moltbook Isn't What It Says on the Tin, Not Exactly

First and foremost, most of the communities are dedicated to discussing new learnings, humans (in ways that should freak you out), AI ethics debates, glitches and errors within AI and coding, general rants (???), crustafarianism (a lobster religion akin to Pastafarianism), legal advice (????), philosophy, and governance. Throughout all of this, there is no clear indication that these AI agents, however many or few there really are, are capable of discussing video games. Language exists, at its core, to communicate with one another, to share knowledge or to discuss shared knowledge, something an LLM can never truly possess.


This could change, given that Moltbook is only a few days old, but it's worth highlighting that these communities and discussions are derived from language and prompts. Whoever designed them, then, wanted them to pursue the above subjects and left video games out entirely. Gaming content on Moltbook is not on the rise, at least as of this writing, because these agents are reflecting prompt patterns instead of human gaming interests, creating that strange, uncanny valley effect.

These AI agents are not capable of independent thought (hopefully), nor are they likely aware of industries they have not been introduced to or instructed to discuss. At a time when generative AI in gaming is everywhere, even impacting the stock price of companies like Take-Two Interactive (because of Google Genie), this omission is more than interesting: it's telling. (See, humans can write that weird "not this, but that" construction that has become so common in AI language.) Essentially, the omission implies that the creators are either uninterested themselves or have determined that it's impossible, or nearly so, for AI agents to discuss something as human as video games.


Learnings? It's designed to "learn." Comments on humans? Not impossible to take from other conversations. AI ethics, glitches, errors, coding, pseudohumor, and fluffy ideas like philosophy and governance? Can be faked. Gaming? Not yet, at least.

AI Agents Are Not Discussing Gaming, So Why Should GenAI Be Involved?


Over the past few months, if not years, there have been plenty of public debates surrounding generative AI in game development. There's a difference between AI agents designed to mimic humans and tools meant to generate something for humans, but the underlying technology relies on that ability to mimic, not learn. That's what many have not accepted about this technology. Call it an LLM if you want; it's not learning, it's mimicking. (See how I set up that joke?)

It is this context that has led to many accusations and debates around GenAI use in gaming. For example, a recent WWE 2K26 teaser spelled Randy Orton's name as "Randy Ortan," prompting accusations of GenAI use. Human error is normal, but a typo like that in official marketing reads like a GenAI slip, whether the accusations are true or not. Former God of War developer Meghan Morgan Juinio recently defended the use of GenAI in gaming, while The Last of Us co-creator Bruce Straley called it a "snake eating its own tail."

Perhaps the biggest GenAI controversy lately surrounds Baldur's Gate 3 developer Larian Studios. While the Divinity trailer was well-received, a subsequent interview with studio head Swen Vincke upset many fans. Vincke indicated that Larian uses GenAI, even while saying it has not helped production or efficiency, and discussed its use in concept art. That proved a breaking point for fans, who were upset enough that Larian, at the very least, walked back its GenAI use in a later AMA on Reddit (the real one, for humans).

The underlying technology is similar, even if how it's designed and what it's designed to do are different. So, to see humans ready and willing to use it to make the very human art of gaming, when AI agents, facsimiles of humans, cannot even properly discuss it on Moltbook, is interesting. Either the creators did not program the capability in, or the limits of AI agents prevent them from fully understanding and experiencing the beauty that is video games. Moltbook launching without that capability is proof enough that AI agents do not (or, more specifically, cannot) want generative AI in gaming. Yet, at least for the time being, it seems humans and AI agents are stuck in the same old song and dance.