Amsterdam Pulls Welfare AI System After Bias Issues Despite Initial Promise
Amsterdam, Sunday, 1 March 2026.
Amsterdam’s ambitious AI system for reducing caseworker bias in welfare decisions was discontinued by June 2025 after audits revealed unforeseen problems. The system had initially shown promise by omitting sensitive information that could sway decisions, a practical approach to ethical AI in government services. The setback highlights a broader European challenge: implementing AI literacy and regulation effectively, as experts warn that inadequate understanding among policymakers poses growing risks to AI deployment in public administration.
The System’s Original Design and Promise
Amsterdam developed a welfare AI system specifically designed to mitigate caseworker bias by omitting sensitive variables that could influence decision-making [1]. The system, known as “Slimme Check,” was piloted as a tool for flagging social welfare applications, and the city took the notable step of convening a citizen advisory panel before ultimately discontinuing the project [3]. The approach was part of Amsterdam’s broader commitment to involving citizens in algorithm design, reflecting the city’s view that public participation is essential to successful AI implementation in government services [3].
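The design principle described above, excluding sensitive attributes from a model’s inputs, is sometimes called “fairness through unawareness.” The sketch below illustrates the idea in minimal form; the field names are hypothetical, as Slimme Check’s actual feature set is not described in the source.

```python
# Illustrative sketch of "fairness through unawareness": drop sensitive
# attributes before an application reaches the scoring model.
# Field names below are hypothetical, not Slimme Check's real schema.

SENSITIVE_FEATURES = {"nationality", "gender", "age", "postcode"}

def strip_sensitive(application: dict) -> dict:
    """Return a copy of the application with sensitive fields removed."""
    return {k: v for k, v in application.items() if k not in SENSITIVE_FEATURES}

application = {
    "income": 1200,
    "household_size": 3,
    "nationality": "NL",
    "gender": "F",
}
features = strip_sensitive(application)
print(sorted(features))  # ['household_size', 'income']
```

A known limitation of this approach is that seemingly neutral features correlated with the omitted ones (proxy variables) can still reintroduce bias, which is one reason omission alone does not guarantee a fair system.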
Timeline of Implementation and Failure
The welfare AI system was ultimately pulled by June 11, 2025, after audits revealed unforeseen issues [1]. This timeline places the system’s discontinuation well into 2025, suggesting that the project had been running for a considerable period before the problems surfaced through formal auditing. The failure occurred despite Amsterdam’s established track record in digital governance innovation, including the development of OpenStad, an open-source participation platform launched in 2016 and later adopted by The Hague in 2019 [3].
Broader Context of AI Challenges in Government
Amsterdam’s experience reflects wider European challenges with AI in government services. The Netherlands has previously grappled with algorithmic bias in public administration, most notably in 2019, when a machine learning model incorrectly flagged over 30,000 childcare benefit cases [1]. In the System Risk Indication (SyRI) case, an algorithmic risk-scoring system used by the Dutch Ministry of Social Affairs and Employment to combat welfare fraud was likewise disbanded for violating privacy and data protection principles [3]. These cases underscore the persistent difficulty of building fair and effective AI systems for welfare administration.
Regulatory Framework and AI Literacy Concerns
The Amsterdam case emerges amid growing concerns about AI literacy across Europe: less than 50% of the EU population possesses basic digital skills, while at least 20% of EU firms use AI [1]. European policymakers are currently crafting changes to scale back and simplify rules for AI and data privacy [2], even as experts warn that Brussels is failing to address the growing dangers of inadequate AI understanding among policymakers and citizens. The EU’s Artificial Intelligence Act, which entered into force on August 1, 2024, imposes requirements on AI system providers and deployers, including public authorities; high-risk AI systems used in welfare eligibility are subject to transparency, traceability, and human oversight requirements [3]. The Amsterdam case demonstrates, however, that regulatory compliance alone may not be sufficient to ensure successful AI deployment in sensitive government applications.