EU Launches Consultation on AI Code of Practice
Europe, Wednesday, 31 July 2024.
The European Commission has launched a consultation to develop a Code of Practice for providers of general-purpose AI models. The initiative aims to establish ethical guidelines and best practices for responsible AI deployment, addressing key areas such as transparency and risk management.
The Need for Ethical AI
The European Commission’s consultation on the Code of Practice for general-purpose AI (GPAI) models is a significant step towards ensuring that AI technologies are developed and deployed responsibly. As AI systems become increasingly integrated into daily life, the need for ethical guidelines to mitigate risks and promote transparency has never been more critical. The AI Act, which enters into force on 1 August 2024, provides a regulatory framework to classify AI systems based on their risk levels, ensuring that high-risk applications are closely monitored[1].
Key Areas of Focus
The Code of Practice will address several critical areas, including transparency, copyright-related rules, risk identification and assessment, risk mitigation, and internal risk management. Providers of GPAI models, along with businesses, civil society representatives, rights holders, and academic experts, are invited to submit their views to shape the upcoming draft of the Code[1]. This collaborative approach aims to create a comprehensive and balanced set of guidelines that reflect the diverse perspectives of stakeholders involved.
Implementation and Enforcement
The feedback from this consultation will also inform the work of the AI Office, which will oversee the implementation and enforcement of the AI Act rules on GPAI. The AI Office is responsible for developing a template and guidelines for summarizing training data used to build GPAI models. These guidelines will be adopted by the Commission and will play a crucial role in the ongoing discussions about the Code of Practice[2].
Benefits of the Code of Practice
The establishment of a Code of Practice for GPAI models will bring several benefits. First, it will enhance transparency by requiring providers to disclose information about the data used to train their AI models, helping to build public trust in AI technologies and ensuring that users understand how these systems operate. Second, the Code will promote ethical AI development by setting clear guidelines for risk identification and mitigation, helping to prevent the deployment of AI systems that pose unacceptable risks to individuals and society. Finally, the Code will foster innovation by providing a clear regulatory framework that encourages responsible AI development[3].
Looking Ahead
The Commission aims to finalize the Code of Practice by April 2025. This timeline allows for extensive consultation and feedback, ensuring that the final guidelines are robust and well-informed. As the AI Act's provisions on GPAI begin to apply 12 months after its entry into force, stakeholders have a clear roadmap for compliance. The phased implementation of the AI Act, starting with the ban on prohibited practices in February 2025 and extending to obligations on high-risk AI systems by August 2026, provides a structured approach to regulating AI technologies[4].
Conclusion
The European Commission’s consultation on the Code of Practice for GPAI models marks a proactive step towards responsible AI governance. By involving a wide range of stakeholders and focusing on transparency, risk management, and ethical considerations, the Commission aims to create a balanced and effective regulatory framework. This initiative not only addresses current challenges in AI deployment but also sets the stage for future advancements in AI technology that are both innovative and trustworthy.