AI Bot Swarms Could Manipulate Elections by 2028, Leading Researchers Warn


Amsterdam, Saturday, 24 January 2026.
Twenty-two experts from top universities warn that sophisticated AI swarms could fundamentally threaten democracy by the 2028 US presidential election. These coordinated networks can control thousands of social media accounts simultaneously, creating synthetic consensus and manipulating public opinion at unprecedented scale. Unlike traditional bots, AI swarms adapt in real-time, mimic human behavior perfectly, and infiltrate communities with tailored messaging that’s virtually undetectable.

The Science Behind AI Swarms

A consortium of twenty-two leading experts published their findings in Science on January 22, 2026, outlining how malicious AI swarms represent a fundamental shift in information warfare [1][3]. These systems combine large language model reasoning with multi-agent architectures, enabling them to coordinate autonomously, infiltrate communities, and fabricate consensus with minimal human oversight [1]. Unlike the Russian Internet Research Agency’s 2016 operation that employed hundreds of human operators to manually spread disinformation [3], modern AI swarms can operate thousands of personas simultaneously while adapting in real-time to audience reactions [1][2].

Real-World Evidence of AI Manipulation

The threat has already materialized in multiple democratic processes. In 2024, early versions of AI-powered influence operations were deployed during elections in Taiwan, India, and Indonesia [1][3]. Puma Shen, a Taiwanese Democratic Progressive Party MP, reported that in the last two to three months of 2025, AI bots significantly increased their engagement with citizens on Threads and Facebook [1]. The U.S. Department of Justice disrupted a Russia-linked, AI-enhanced bot farm in July 2024 that controlled 968 X accounts impersonating Americans [2]. A 2025 analysis estimated that approximately 0.2% of accounts and posts in major online conversations were automated [2].

How AI Swarms Infiltrate and Manipulate

Daniel Thilo Schroeder, a research scientist at the SINTEF research institute in Oslo, explains that AI swarms can navigate online social media platforms, email systems, and messaging channels with frightening ease [1]. These systems map social network structures and infiltrate vulnerable communities with tailored appeals designed to gain followers [4]. They employ human-level mimicry to evade detection, using photorealistic avatars and context-appropriate slang while harvesting real-time engagement data to self-optimize through millions of micro-A/B tests [4]. The swarms can engineer synthetic consensus by seeding narratives across niches and boosting the illusion of agreement through coordinated likes and shares [4].
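The self-optimization loop the researchers describe resembles a classic multi-armed bandit: a persona tries several phrasings of the same narrative, measures engagement, and shifts toward whatever performs best. The Python sketch below illustrates that principle with a simple epsilon-greedy strategy; the variant names and engagement rates are entirely hypothetical simulation inputs, not data from any documented swarm.

```python
import random

def epsilon_greedy_variant(stats, epsilon=0.1):
    """Pick a message variant: usually the best observed performer, sometimes a random one."""
    if random.random() < epsilon:
        return random.choice(list(stats))
    # Exploit: choose the variant with the highest observed engagement rate so far.
    return max(stats, key=lambda v: stats[v]["hits"] / max(stats[v]["trials"], 1))

def run_micro_ab_test(true_rates, rounds=5000, epsilon=0.1):
    """Simulate repeated micro-A/B tests over message variants (engagement is simulated)."""
    stats = {v: {"trials": 0, "hits": 0} for v in true_rates}
    for _ in range(rounds):
        v = epsilon_greedy_variant(stats, epsilon)
        stats[v]["trials"] += 1
        if random.random() < true_rates[v]:  # hypothetical per-variant engagement rate
            stats[v]["hits"] += 1
    return stats

# Hypothetical engagement rates for three phrasings of one narrative.
rates = {"variant_a": 0.02, "variant_b": 0.05, "variant_c": 0.03}
results = run_micro_ab_test(rates)
best = max(results, key=lambda v: results[v]["trials"])
```

Run at scale across thousands of personas, this kind of loop is what lets a swarm converge on the most persuasive framing for each audience without human tuning.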

Commercial Tools and Democratic Threats

The commercial infrastructure supporting large-scale manipulation is already emerging. Doublespeed, backed by Andreessen Horowitz, advertises capabilities to “orchestrate actions on thousands of social accounts” while mimicking “natural user interaction” [2]. The Vanderbilt Institute of National Security has documented “GoLaxy” as an AI-driven influence machine [2]. These tools enable what researchers term “LLM grooming”: poisoning the training data for future AI models by flooding the web with fabricated content [2][4]. Michael Wooldridge, professor of AI foundations at Oxford University, warns that deploying virtual armies of LLM-powered agents to disrupt elections and manipulate public opinion is “entirely plausible” [1][3].

Proposed Solutions and Observatory Framework

The research consortium proposes establishing a distributed “AI Influence Observatory” ecosystem to standardize evidence and improve collective response capabilities [2][3][4]. Frank Schweitzer from ETH Zurich, one of the contributing experts, notes that targeted vote influencing is possible even in Switzerland, though high voter turnout and diverse media use could provide some protection [5]. The proposed solutions include continuous real-time monitoring systems that scan for statistically anomalous coordination patterns, mandatory transparency requirements for platforms, and “AI shields” that allow users to identify and filter posts with high swarm-likelihood scores [4]. However, Nina Jankowicz, CEO of the American Sunlight Project, warns that there is “very little political will to address the harms AI creates,” making it likely that AI swarms will soon become reality [3].
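One building block of the real-time monitoring described above, detecting statistically anomalous coordination, can be as simple as flagging groups of accounts that publish near-identical text within a narrow time window. The Python sketch below illustrates the idea on toy data; the threshold, window, and post records are hypothetical, and production systems would rely on far richer signals than text matching alone.

```python
from collections import defaultdict

def flag_coordinated_posts(posts, window_seconds=60, min_accounts=3):
    """Group posts by normalized text, then flag any text pushed by many distinct
    accounts within a short time span -- a crude proxy for coordinated amplification."""
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text.strip().lower()].append((account, ts))
    flagged = []
    for text, entries in by_text.items():
        entries.sort(key=lambda e: e[1])  # order by timestamp
        accounts = {a for a, _ in entries}
        span = entries[-1][1] - entries[0][1]
        if len(accounts) >= min_accounts and span <= window_seconds:
            flagged.append(text)
    return flagged

# Toy data: (account, text, unix_timestamp). Entirely hypothetical.
posts = [
    ("bot_1", "Candidate X lied again!", 1000),
    ("bot_2", "candidate x lied again!", 1010),
    ("bot_3", "Candidate X lied again!", 1030),
    ("user_9", "Nice weather today.", 1005),
]
suspicious = flag_coordinated_posts(posts)
```

In practice, swarms that paraphrase each message defeat exact-text matching, which is why the researchers emphasize statistical coordination patterns (timing, follower overlap, engagement graphs) rather than content alone.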

Sources

