Dutch Companies Dodge AI Rules by Mislabeling Systems as Basic Algorithms
The Hague, Thursday, 5 March 2026.
The Dutch Data Protection Authority reveals companies are deliberately misclassifying AI systems as regular algorithms to avoid stricter regulations. This regulatory gaming undermines the EU AI Act’s enforcement, with high-risk AI applications in recruitment and healthcare escaping proper oversight. The authority warns this could trigger new discrimination scandals similar to the childcare benefits affair that wrongly targeted thousands of families five years ago.
AI Impact Barometer Shows Alarming Deterioration
The Autoriteit Persoonsgegevens released the sixth edition of its Rapportage AI & Algoritmes Nederland on 4 March 2026, revealing a significantly deteriorating landscape for AI governance in the Netherlands [1][2]. The authority’s AI Impact Barometer, which tracks nine critical indicators, now marks four of them red, up from two in the previous report [2][3]. The watchdog specifically identified insufficient progress in four areas: establishing frameworks and powers for AI supervision, developing harmonized and practically applicable standards, improving the registration and transparency of algorithms and AI systems, and making incidents, and the lessons learned from them, more visible [2].
Strategic Misclassification Undermines Regulatory Compliance
Companies are systematically attempting to circumvent the European AI Act’s stringent requirements by deliberately misclassifying their artificial intelligence systems as ordinary algorithms [1][4]. Joost van der Burgt, who leads AI supervision at the Autoriteit Persoonsgegevens, explained that organizations “must register their artificial intelligence, but then act as if they are ordinary algorithms. The rules for those are less strict” [1]. A concrete example is OxRec, used by rehabilitation organizations to predict recidivism risk, which was registered as a standard algorithm despite actually being an AI system [4]. This regulatory gaming is particularly concerning because high-risk AI systems, such as those used in healthcare and crime detection, will face comprehensive requirements including technical documentation, risk management, and bias mitigation starting next year [1].
High-Risk AI Applications Escaping Proper Oversight
The recruitment sector represents a significant blind spot in enforcement, with many organizations unaware that AI systems used for hiring, selection, promotion, or dismissal fall under the EU AI Act’s high-risk category [5][6]. These applications must comply with strict requirements, including human oversight, data quality standards, transparency, and explainability, by August 2026 [4][6]. In practice, AI systems in recruitment processes often fail to meet the requirements for accuracy, non-discrimination, and explainability, unfairly disadvantaging some candidates [4]. The Autoriteit Persoonsgegevens warns that many Dutch AI applications lack transparency because registration is rarely mandatory, and that some organizations attempt to classify AI systems as ordinary algorithms to evade the AI Act [4].
Healthcare Sector Faces Growing AI Risks
The healthcare sector is particularly vulnerable to unregulated AI deployment; the Autoriteit Persoonsgegevens explicitly cites it as an area where systemic risks could affect large groups of people or vital societal functions [7]. The authority has documented concerning incidents of AI chatbots giving incorrect medical advice, which is especially dangerous for young people with mental health problems [7]. The report references a U.S. lawsuit in which parents sued OpenAI after their teenager’s suicide, and notes that “the number of incidents of psychoses in relation to AI chatbots increased” [7]. OpenAI subsequently modified ChatGPT after criticism of its growing use for therapy, a purpose for which the chatbot is unsuitable, and recently launched ChatGPT Health specifically for healthcare in the United States [7].
Urgent Government Action Required
Autoriteit Persoonsgegevens Chairman Aleid Wolfsen issued a stark warning linking current AI governance failures to the Netherlands’ childcare benefits scandal, stating: “Five years after the childcare benefits scandal, the lessons are clear, but the follow-up is lagging. This is mainly because strong rules for algorithms and AI and their enforcement are lacking” [1][3]. The authority demands that the new cabinet expedite implementation of the AI Act by enacting Dutch implementing legislation, appointing supervisors, structuring funding for supervision, and clarifying how the rules apply [2][4]. Wolfsen emphasized the urgency: “Now that the pressure to embrace AI is increasing, we must protect fundamental rights. Anyone who wants to prevent a new scandal must act now” [3][4]. European regulations currently prevent the introduction of specialized AI health services in the Netherlands, making swift regulatory clarity essential for both innovation and protection [7].