Digital Transformation Notes | February 2024

Just over a month into the new year, 2024 has already been packed with legislative activity in the field of digital transformation and, in particular, AI.

In the U.S., the White House published a list of key actions to be taken in light of the recent Executive Order on artificial intelligence. Among others, these actions include a request to report information on vital AI safety issues, as well as a similar request to report on the use of cloud services by foreign organizations for AI training purposes.

On an international level, attempts to conclude the world’s first international treaty on AI (the Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law) have been dominated by discussions on whether or not private organizations (in addition to public ones) should be covered by any of the Treaty’s obligations, with the U.S., the UK, and other potential signatories taking a very sceptical view on this issue.

Yet none of these developments have been as captivating as the latest iterations of the long-awaited Artificial Intelligence Act (“AI Act”).

Following the hard-fought political agreement reached in December 2023 on all core aspects of the European Union’s (EU) AI Act, the Belgian presidency of the Council of Ministers presented a close-to-final text of this European regulation. Despite very little time to scrutinize this (almost) final version, and some persisting fears that France, Germany or Italy could withhold their approval given the powerful AI models being developed in their own countries, on Friday 2 February the ambassadors of all 27 EU member states gave their unanimous consent to the world’s first comprehensive rulebook for artificial intelligence. While the Committees on Civil Liberties, Justice and Home Affairs and on the Internal Market will still meet in mid-February, followed by further votes in the EU Council and Parliament at the end of February, it is now certain that the EU’s AI Act is coming, and we have some clarity on the details of what it will look like.

Following its publication in the Official Journal of the European Union, expected around April, the AI Act will enter into force 20 days later, i.e. before the summer break. Some provisions, in particular those on prohibited AI applications, will apply six months after that, towards the end of 2024. The important provisions on general purpose AI (or foundation models) will take effect by mid-2025, and most remaining provisions will apply as of mid-2026.

Due to the AI Act’s extremely broad scope, both in terms of territory and of the AI systems covered, it directly affects a very wide array of businesses. Territorially, the AI Act applies extraterritorially: it covers providers in countries outside the EU that place AI systems or general-purpose AI models on the market within the EU, or that physically run AI systems in the EU. It further applies to providers or deployers of AI systems outside the EU where the output produced by the AI system is used within the EU. Likewise, the definition of an AI system is very broad and in line with the OECD’s definition: an AI system is (1) machine-based, (2) designed to operate with varying levels of autonomy, (3) may exhibit some adaptiveness, and (4) infers, from the input it receives, how to generate output. This broad wording does not necessarily require any self-learning or machine-learning capabilities, as the system only “may” need to be adaptive.

Arguably, the AI Act’s most important concept is its risk-based approach, which distinguishes four different levels of risk. AI systems posing an unacceptable risk are prohibited outright; these relate to, among others, systems that (1) deploy manipulative or deceptive techniques, (2) result in social scoring, (3) compile facial recognition databases, or (4) infer emotions in the workplace or educational institutions.

For so-called high-risk AI systems, the AI Act establishes detailed and comprehensive obligations that predominantly apply to providers and deployers of such systems. These obligations address a wide range of AI governance measures and technical interventions that need to be put in place during the various stages of the system’s implementation, and then be monitored and maintained throughout the AI system’s lifecycle; they cover transparency, risk management, accountability, data governance, human oversight, accuracy, robustness, and cybersecurity. There are two main types of high-risk systems: those that are considered safety components or products subject to existing safety standards, and those that serve a particular high-risk purpose. The latter include, for example, (1) certain biometrics; (2) education and vocational training, including systems to determine access or admission or to evaluate learning outcomes; (3) systems for work-related recruitment, selection, monitoring, termination or promotion; and (4) access to and enjoyment of essential public and private services, including credit scoring and pricing in health and life insurance.

The risk-based classification of AI systems is supplemented by rules for general purpose AI, which were introduced into the text last year in response to the rise of foundation models such as large language models (“LLMs”). The specific obligations for these models fall into four categories, with additional obligations for those general purpose AI models that entail systemic risks: (1) drawing up and keeping up to date technical documentation of the model, including its training and testing process and evaluation results; (2) providing transparency to downstream system providers looking to integrate the model into their own AI system; (3) putting in place a policy for compliance with copyright law; and (4) publishing a detailed summary of the training data used in the model’s development.

Providers of general purpose AI models entailing a systemic risk (to be assessed based on technical thresholds indicating high-impact capabilities, which will be developed further in the future) must also (1) perform model evaluations; (2) implement risk assessment and mitigation measures; (3) maintain incident response and reporting procedures; and (4) ensure an adequate level of cybersecurity protection. The EU Commission may adopt delegated acts to amend these thresholds.

For an in-depth review of the latest text of the AI Act, take a look at our more comprehensive articles on HL Engage, by Martin Pflueger, Stefan Schuppert, Eduardo Ustaran, Nicole Saurin, David Bamberg, Dan Whitehead, and Jasper Siems.

 

Authored by Leo von Gerlach.
