The number of deepfake files in circulation exploded from approximately 500,000 in 2023 to about 8 million in 2025, according to SQ Magazine. This unprecedented surge in synthetic media is saturating digital spaces and challenging information integrity.
Policymakers have enacted rules to identify AI-generated content. However, a more insidious impact comes from content that is known to be AI-generated yet still effectively manipulates public sentiment. Transparency alone proves insufficient; current regulation misses the core issue.
Without a fundamental shift in digital literacy and platform accountability, online information integrity will erode, making informed public discourse challenging. Focus must shift beyond detection to understanding AI's influence on human behavior. This is essential for safeguarding democracy and public trust.
The Unstoppable Flood: AI's Exponential Content Growth
NewsGuard's tracker of industrialized AI content farms recorded a substantial increase from 2,089 sites in October 2025 to 3,006 sites by March 2026, according to SQ Magazine. The expansion represents a deliberate scaling of content generation: AI writing tools have reduced the effort and expertise needed for mass content creation, according to Nature, dramatically lowering the barrier to entry. The sheer volume and accessible distribution overwhelm traditional information gatekeepers, making fact-checking an increasingly futile defense against broad influence campaigns.
Beyond 'Fake News': The Misleading Misinformation Metric
Less than 1% of fact-checked 2024 election misinformation was identified as AI-generated, according to SQ Magazine. The statistic suggests AI's direct role in traditional misinformation remains limited, creating a false sense of security. However, the figure fails to capture the broader, more subtle deployment of AI content to influence public opinion. These methods are harder to fact-check directly because they avoid outright false claims. Focusing on 'fake news' is too narrow given AI's evolving manipulation tactics; such content is often intentionally ambiguous or emotionally provocative, which puts it beyond the reach of traditional fact-checking.
The Rise of 'Slopaganda': Emotional Manipulation Over Deception
AI-generated images, even when evidently fake, are used as 'slopaganda' to influence political views by appealing to emotions rather than deceiving people into believing they are real, as observed by EDMO. The tactic bypasses authenticity entirely, targeting visceral reactions instead. This marks a clear tactical shift: AI is weaponized not to produce believable fakes, but to overwhelm and emotionally sway audiences. The goal is to distort public discourse through saturation, rendering traditional fact-checking ineffective against openly artificial yet emotionally potent content.
Regulation's Reach: A Mismatch for an Evolving Threat
Regulation (EU) 2024/1689, including provisions from the EU AI Act, lays down harmonized rules requiring generative AI providers to ensure content is identifiable and clearly labeled, especially deepfakes, according to EUR-Lex. The AI Act also mandates transparency, requiring disclosure when users interact with AI systems, per the EU Digital Strategy. While crucial, these regulations primarily target identifiability and do nothing to mitigate content designed for emotional manipulation rather than outright deception. Legislative efforts prioritizing labeling and transparency are fundamentally misaligned with the actual threat of AI-generated 'slopaganda', which thrives on emotional appeal. A comprehensive approach must consider psychological impact, not just origin.
By Q4 2026, platforms like X and Meta will likely need to implement more sophisticated content moderation strategies, moving beyond simple labeling to preserve public discourse against emotionally manipulative AI-generated content.