Dutch Watchdog Advises on Meta's AI Privacy Concerns

Amsterdam, Friday, 2 May 2025.
The Dutch privacy regulator warns against Meta’s AI, citing potential privacy risks in using public data from social platforms like Facebook and Instagram.

Latest Privacy Concerns

The Dutch privacy watchdog Autoriteit Persoonsgegevens (AP) has expressed serious concerns about Meta’s plans to use user data for AI training. According to AP vice-chair Monique Verdier, ‘The risk is that as a user you lose control over your personal data. Everything you have ever posted on Instagram or Facebook will soon be in that AI model, without your knowing exactly what happens to it’ [1]. This warning comes after Meta announced the rollout of Meta AI across the EU in March 2025, following its US launch in September 2023 [1].

Regulatory Challenges and Implementation

Meta’s European expansion hit a significant hurdle in summer 2024, when the Irish Data Protection Commission halted the company’s plans over its intention to use adult users’ data from Facebook and Instagram to train large language models [1]. In response, Meta has developed new privacy-focused solutions, including ‘Private Processing’ technology for WhatsApp, designed to enable AI capabilities while preserving user privacy [2]. This technology is to be implemented through a secure framework that prevents both Meta and WhatsApp from accessing users’ encrypted communications [3].

Enhanced Privacy Controls

Meta has introduced several protective measures across its platforms. For WhatsApp, which serves approximately 3 billion users, the company has implemented an ‘Advanced Chat Privacy’ setting that allows users to prevent others from exporting chats, auto-downloading media, and using messages for AI features [4]. Additionally, Meta has announced the Llama Defenders Program, introducing new protection tools including Llama Guard 4, LlamaFirewall, and Llama Prompt Guard 2, specifically designed to enhance security for AI applications [5].

Future Implications and Security Measures

As Meta continues to expand its AI capabilities, the company faces increasing scrutiny from privacy regulators. Meta’s commitment to allowing security researchers to audit its Private Processing technology and including it in its bug bounty program demonstrates an attempt to address these concerns [6]. However, critics like Johns Hopkins cryptographer Matt Green warn that utilizing off-device AI inference may pose inherent risks to user privacy [4]. The ongoing dialogue between Meta and European privacy regulators will likely shape the future implementation of AI features across the company’s platforms [1].

Sources

