The process of developing the EU's
first Code of Conduct has officially begun. The Code will detail
the rules of the EU's AI Act for providers of general-purpose
artificial intelligence (GPAI) models, including those posing
systemic risks.
The kick-off plenary session, convened by the European AI Office,
was attended by almost a thousand participants, including
providers of general-purpose AI models, downstream providers,
and representatives of industry, civil society, academia and
independent experts.
The online meeting was operational in nature and open only to
interested parties who had registered by 25 August 2024.
Before their publication in the autumn, the AI Office will
also present the first results of the stakeholder consultation
on the Code of Conduct, which received almost 430
contributions. The Code of Conduct aims to facilitate the correct
application of the rules set out in the AI Act in relation to
general-purpose AI models, including those on transparency and
copyright, the systemic risk taxonomy, and risk assessment and
mitigation measures.
The process of developing the Code of Conduct involves four
working groups meeting three times to discuss drafts. This
process will be led by chairs and vice-chairs, including Marta
Ziosi, a post-doctoral researcher at the Oxford Martin AI
Governance Initiative, and Daniel Privitera, founder and
executive director of the Kira Center, an independent non-profit
organization based in Berlin focused on AI. The two Italians
will serve as vice-chairs of the 'risk identification and
assessment' and 'technical risk mitigation' working groups,
respectively. The final version of the Code of
Conduct will be published and presented in a closing plenary,
scheduled for April 2025.
photo: OpenAI CEO Sam Altman
ALL RIGHTS RESERVED © Copyright ANSA