Rotterdam's AI Use Raises Concerns Over Lack of Governance

Rotterdam, Wednesday, 11 December 2024.
Rotterdam employs AI systems without clear public policies, sparking ethical concerns and prompting calls for transparent regulatory frameworks in public administration.

Historical Context and Current Concerns

Rotterdam’s history with AI implementation has been marked by controversy. In 2017, the city deployed an AI system for welfare fraud detection [1], which was suspended in 2021 after an external ethics review identified biases against specific demographic groups [1]. This history makes the current lack of governance guidelines particularly troubling: a recent global study found that only around 15% of local governments using AI systems have public-facing policies covering their implementations [1].

Scope of AI Implementation

The scale of AI adoption in local governance is significant: researchers identified 262 cases across 170 local councils internationally [1]. These AI systems are deployed across five crucial areas: administrative services, healthcare and wellbeing, transportation and urban planning, environmental management, and public safety [1]. Despite this widespread implementation, public awareness remains low; a survey indicates that while 75% of people know about AI technologies, half are unaware of their local government’s use of AI in public services [1].

Public Awareness Gap

The transparency deficit is underscored by the finding that 68% of citizens are uncertain whether their local governments have policies governing AI use [1]. This knowledge gap is particularly concerning given the potential impact of AI systems on public service delivery. While some cities, such as Barcelona, have established comprehensive public AI policies emphasizing transparency, explainability, and fairness [1], Rotterdam’s approach lacks similar safeguards.

The Path Forward

The need for robust AI governance frameworks is becoming increasingly urgent. Without clear policies, there are risks of ethical violations, systemic biases, and unregulated data use [1]. Experts emphasize that establishing transparent guidelines is crucial to prevent potential harm from unregulated AI applications in governance [1]. Rotterdam’s experience serves as a cautionary tale, highlighting the importance of implementing comprehensive AI policies before, rather than after, problems emerge.

Sources

