
Digital Transformation Notes | May 2024


Global cooperation on AI safety has clearly been the topic of the month, marked by the Seoul Declaration for Safe, Innovative, and Inclusive AI as well as the OECD's publication on universal concepts of AI and cybersecurity. The European Union's creation of a new AI Office added to this trend of coordinating regulatory approaches across national borders.

The AI Seoul Summit 2024 

On 21 May, the Republic of Korea and the UK co-hosted the AI Seoul Summit 2024. The event, the second in a series launched last year, brought together leaders representing ten governments and the EU to sign the Seoul Declaration for Safe, Innovative, and Inclusive AI. Building on the work of last year's UK AI Safety Summit, summarized in the so-called Bletchley Declaration, government representatives from Australia, Canada, the EU, France, Germany, Italy, Japan, the Republic of Korea, the Republic of Singapore, the UK, and the U.S. gathered in Seoul to cement their commitment to intensifying international cooperation on AI regulation. Their stated intention is to build, step by step and over time, an increasingly consistent approach to AI regulation.

The Seoul Declaration anchors AI governance and corresponding frameworks in a risk-based approach – so as to maximize the potential benefits and mitigate the remaining risks of AI applications of all kinds. It places a strong emphasis on the need for enhanced international cooperation to advance AI safety, innovation, and inclusivity, to protect and promote democratic values, the rule of law, human rights, and fundamental freedoms, and, above all, to bridge gaps in AI regulation across jurisdictions.

The OECD AI Papers

Also this month, the OECD published a report streamlining key definitions in AI and cybersecurity. The underlying rationale is to create a common vocabulary and understanding of the relevant AI and cybersecurity concepts so that they can be used coherently in regulation and legislation around the world. The report is to be read as offering conceptual templates of AI and cybersecurity terminology for future legislation to build on. It provides detailed definitions of AI incidents and AI hazards based on the concepts of actual and potential harm, and it outlines useful criteria for classifying AI hazards and incidents by severity. In short, the report takes a very targeted approach to harmonizing legal concepts in the sphere of AI and cybersecurity.

The AI Act and the Establishment of an AI Office of the European Union

On 21 May, the Council of the EU gave the green light to the AI Act. Following this final approval, the definitive text of the AI Act will shortly be published and will enter into force twenty days thereafter. The first rules of the new AI Act to become mandatory are those on so-called prohibited applications, which are regarded as posing an unacceptable risk; these include AI-based social scoring, certain forms of facial image databases, and real-time biometric surveillance of people in public places.

Another immediate consequence of the new AI Act is the establishment of the EU AI Office, which the EU Commission has just announced it has set up. The AI Office will perform most of the practical tasks deriving from the AI Act. In line with these tasks, the new office is divided into the following five units:

  • Regulation & Compliance – facilitating a uniform application of the AI Act in EU member states;
  • AI Safety – focusing on identifying and considering mitigation strategies for systemic AI risks;
  • AI & Robotics – stimulating and supporting the development of intelligent systems and their integration in the newly emerging AI ecosystem;
  • AI for Societal Good – evaluating the use of AI for pro-social activities and civic tasks; and,
  • Innovation & Policy Coordination – overseeing the execution as well as the effects of the AI Act, and devising potential improvements going forward.

For a comprehensive analysis of the AI Act, delve into our series, The EU AI Act: an impact analysis, parts 1 and 2.

The Italian AI Bill

Almost simultaneously with the EU AI Act's approval, the Italian Council of Ministers adopted its own national AI draft bill. Whilst national legislators of EU member states are not prevented from issuing additional AI regulation, it is fair to say that this is not what the European legislator wanted to encourage. Rather, the EU's AI Act was meant to provide a uniform and conclusive regulatory framework that would not necessitate further rules at the national level, as such regulatory fragmentation is typically seen as an impediment to the successful development and commercialization of AI systems. Should the Italian bill be turned into law, it will apply alongside the AI Act to AI-related developments and deployments in Italy.

The Italian draft bill does try to remain aligned with the principles of the AI Act, including on transparency, proportionality, protection of personal data, accuracy, confidentiality, non-discrimination, and system security. It regulates the research, experimentation, development, adoption, and application of AI systems and models on national territory.

For a more detailed take on the Italian AI draft bill, enjoy our article, Leaked Italian AI draft bill reveals national push to anticipate the AI Act.

Next steps

Subscribe to the newsletter here

 

Authored by Leo von Gerlach and Julio Carvalho.
