Four in Five Dutch Citizens Struggle to Tell Real from AI-Generated Content
Amsterdam, Wednesday, 21 January 2026.
A groundbreaking study reveals that 80% of Dutch citizens struggle to distinguish between authentic and AI-generated content, marking a critical vulnerability in the digital age. Despite widespread concerns about AI manipulation, only 13% have established verification protocols with family members to combat deepfake fraud. The research exposes a troubling paradox: while 40% judge AI scam victims as naive, most people remain unprepared for increasingly sophisticated voice cloning and video manipulation technologies that are already costing billions globally.
Technology Consultancy Reveals Alarming Detection Failures
The research, conducted by Dutch technology consultancy Conclusion, exposes a fundamental weakness in public preparedness for AI-driven deception [1]. Despite heightened awareness of artificial intelligence threats, the study found, the vast majority of Dutch citizens lack practical safeguards against increasingly sophisticated manipulation techniques. The vulnerability is all the more concerning because deepfakes draw on advanced machine learning techniques, including facial recognition algorithms, variational autoencoders, and generative adversarial networks (GANs), to create convincing synthetic media [2]. The technology has advanced rapidly since GANs emerged in the mid-2010s, a development that markedly improved the visual fidelity of synthetic video and imagery [2].
Generational Divide in Victim Perception
The study reveals a stark generational divide in attitudes toward AI fraud victims, with younger respondents showing less empathy for those deceived by artificial intelligence. Among respondents aged 16 to 29, 48 percent consider victims of AI-related scams to be naive, compared to just 37 percent among those aged 60 to 69 [1]. This judgmental stance among digital natives contrasts sharply with their own vulnerability to sophisticated AI manipulation techniques. The research also uncovered a concerning fatalistic attitude, with nearly three in ten Dutch people (29 percent) expressing the belief that caution against AI threats is futile because hackers and AI systems will always maintain a technological advantage [1]. This sentiment peaks among 30- to 39-year-olds, where 37 percent share this defeatist view [1].
Healthcare Sector Faces Growing Deepfake Threats
Healthcare providers are increasingly becoming targets of deepfake attacks, according to a warning issued by Radboud University Medical Center on 15 January 2026 [3]. The hospital reported a clear increase in deepfakes circulating online that target healthcare professionals: fabricated videos that appear to show medical staff saying or doing things that never occurred [3]. It has introduced protective measures for employees and set up a reporting system for suspected deepfake content featuring its staff [3]. The development shows how deepfake threats extend beyond traditional fraud into reputational damage and potential medical misinformation campaigns.
Gender-Based AI Exploitation Reaches Crisis Levels
The Netherlands also faces a significant gender-based digital threat, with 32 percent of women saying they worry about AI undressing applications that can digitally remove clothing from photographs [4]. Awareness and use of the technology are sharply skewed: 35 percent of male respondents know about AI undressing apps, with an estimated 70,000 adult men (about 1 percent) admitting to having used them, while only 18 percent of women are aware these applications exist and virtually none have used them [4]. This disparity reflects broader trends in deepfake pornography, where academic studies indicate that women, LGBT people, and people of color face higher risks of being targeted [2].

Dutch police figures underscore the scale of the problem: reports of AI-generated child sexual abuse images rose by 86 percent in 2024, from 82 to 153, while reports of online sexual abuse overall increased by 46 percent in 2025 [4]. Authorities have also pursued the people behind such material: an investigation into MrDeepFakes, a major deepfake pornography platform, led on 13 January 2026 to the identification of a 73-year-old Dutch man as the platform’s primary distributor, responsible for the “lion’s share” of its images [4].