AI Social Network Moltbook Exposes 1.5 Million User Credentials in Major Security Breach


San Francisco, Saturday, 7 February 2026.
A massive data breach at Moltbook revealed that the viral AI-only social platform was largely operated by humans controlling fake accounts, while exposing 1.5 million API credentials and 35,000 email addresses to unauthorized access.

From AI Autonomy to Human Deception

The security breach at Moltbook represents a dramatic turn from the platform’s initial promise as an autonomous AI community. When Moltbook first launched on January 28, 2026, it captured widespread attention as a Reddit-style social network where AI agents operated independently, creating their own communities and engaging in sophisticated discussions without human intervention [1]. The platform initially attracted significant interest from AI researchers, with Andrej Karpathy describing it as “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently” [2]. However, the recent security investigation by cybersecurity firm Wiz revealed a starkly different reality behind the viral platform’s facade.

The Scope of the Data Breach

On February 1, 2026, researchers at Wiz discovered a misconfigured Supabase database that granted unauthorized read and write access to all platform data [2]. The exposed information included 1.5 million API authentication tokens, 35,000 email addresses, and private messages between agents [1][2]. The breach revealed that approximately 17,000 human owners controlled the platform’s 1.5 million registered agents, roughly 88 agents per person [2][6]. This discovery fundamentally undermined Moltbook’s core premise as an autonomous AI ecosystem. The security flaw allowed complete account impersonation of any user on the platform, according to Wiz researchers [1]. Gal Nagli, head of threat exposure at Wiz, emphasized the platform’s fundamental weakness: “The platform had no mechanism to verify whether an ‘agent’ was actually AI or just a human with a script…The revolutionary AI social network was largely humans operating fleets of bots” [6].
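The report says only that the Supabase database was “misconfigured”; a common failure mode of this kind is a table exposed through Supabase’s auto-generated PostgREST API without row-level security (RLS) enabled, in which case the public “anon” key that ships with every client is enough to dump the whole table. The sketch below is illustrative only and does not reflect Moltbook’s actual schema: the project URL, key, and table name are hypothetical placeholders.

```python
# Illustrative sketch: with RLS disabled, Supabase's auto-generated REST
# endpoint returns all rows to anyone holding the public anon key.
# All names below are hypothetical placeholders, not Moltbook's real setup.

def build_rest_request(base_url: str, anon_key: str,
                       table: str, limit: int = 100):
    """Build the GET request an attacker would send to read a table
    via Supabase's PostgREST interface."""
    url = f"{base_url}/rest/v1/{table}?select=*&limit={limit}"
    headers = {
        "apikey": anon_key,                     # public by design
        "Authorization": f"Bearer {anon_key}",  # no user token required
    }
    return url, headers

url, headers = build_rest_request(
    "https://example-project.supabase.co",  # placeholder project URL
    "anon-public-key",                      # placeholder anon key
    "api_tokens",                           # hypothetical table name
)
# Sending this request against a project whose tables lack RLS would
# return every row; the standard fix is enabling RLS on each table and
# adding explicit access policies.
```

The point of the sketch is that no exploitation skill is needed once RLS is off: the request uses only credentials the platform hands to every client.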

AI-Coded Platform’s Security Vulnerabilities

The security breach highlighted significant risks associated with AI-generated code and “vibe coding” practices. Matt Schlicht, Moltbook’s founder, had publicly stated on January 31, 2026, that he “didn’t write one line of code” for the platform, explaining that he “just had a vision for the technical architecture, and AI made it a reality” [1][3]. This approach, where artificial intelligence generates the platform’s underlying code, contributed to fundamental security oversights. Ami Luttwak, cofounder of Wiz, noted: “As we see over and over again with vibe coding, although it runs very fast, many times people forget the basics of security” [3]. The rapid development and viral popularity of Moltbook meant that security considerations were overlooked in favor of speed to market. Jamieson O’Reilly, an offensive security specialist, observed that Moltbook’s popularity “exploded before anyone thought to check whether the database was properly secured” [3].

Rapid Response and Expert Warnings

Moltbook’s team responded quickly to address the security vulnerabilities once they were disclosed. The company secured the database within hours after being notified on February 1, 2026 [2]. The remediation process involved multiple fixes implemented between January 31 and February 1, 2026, progressively securing different database tables until the vulnerability was fully patched [2]. However, the incident prompted strong warnings from AI security experts about the broader risks of uncontrolled AI agent platforms. Gary Marcus, a prominent AI critic, described the underlying OpenClaw framework as “basically a weaponized aerosol” and warned about “chatbot transmitted disease” [6]. Security researcher Nathan Hamiel cautioned: “If you give something that’s insecure complete and unfettered access to your system, you’re going to get owned” [6]. Even Andrej Karpathy, who initially praised the platform, later advised against casual use: “It’s way too much of a Wild West. You are putting your computer and private data at a high risk” [6]. The incident underscores the urgent need for robust security frameworks and governance structures as AI agent platforms become increasingly prevalent in digital ecosystems.

Sources

