The AI Act is coming: EU reaches political agreement on comprehensive regulation of artificial intelligence

On 8 December 2023, after marathon “trilogue” negotiations, the Council of the EU, the European Parliament and the European Commission reached a groundbreaking agreement on the forthcoming AI Act. Although the final text is still subject to legal and linguistic revision and has not yet been published, the compromise reached will pave the way for a model of comprehensive regulation of artificial intelligence (AI) that is set to have a global effect.

The legislative process

The legislative process started on 21 April 2021 with the European Commission's publication of its AI Act proposal. The Council adopted its negotiating mandate on the proposal on 25 November 2022, and the Parliament confirmed its position in a plenary vote on 14 June 2023.

Following the rotation of the Presidency of the Council to Spain in July 2023, the Council announced that the AI Act would be given priority, with the aim of reaching a final text by the end of the year. However, negotiations appeared to stall on 10 November 2023, when Germany, France and Italy strongly opposed the Parliament's strict approach to the regulation of foundation models, leaving the future of the AI Act uncertain. The Council favoured a lighter approach of mandatory self-regulation through codes of conduct, which would have allowed developers of foundation models to adopt a voluntary compliance mechanism.

To overcome the resulting impasse, the Parliament and the Council settled on a tiered approach introducing a stricter regime for “high-impact” foundation models and general-purpose AI systems, which eventually paved the way for a final compromise. The press release of the Council can be found here and the press release of the European Parliament can be found here.

Foundation Models

The regulation of so-called foundation models (i.e., general-purpose AI systems trained on large amounts of data to perform a variety of tasks, such as generating text, computer code, video or images, or conversing in natural language) has attracted much legislative attention and controversy over recent months. Under the compromise now reached, the AI Act will distinguish between the rules and obligations that apply to all foundation models and the additional ones that apply to particularly powerful, “systemic” foundation models.

Providers of all foundation models – with the exception of those that are “open source” – are expected to be required to comply with certain transparency obligations, including: (1) maintaining up-to-date technical documentation explaining, in particular, the model's performance and limitations; (2) publishing an acceptable use policy; and (3) providing a detailed description of the content used for training.

Additionally, measures will need to be taken to ensure that EU copyright law is adhered to. This applies particularly to text and data mining activities, where opt-out requests by content owners (e.g., media organisations) will need to be honoured.

Another requirement is the publication of a sufficiently detailed summary of the data used to develop the foundation model. This is likely to become a point of contention for many providers, as it will be a particularly challenging obligation to comply with retrospectively.

Special rules will also apply to particularly powerful foundation models, defined by reference to the computing power used for their pre-training (10 septillion, i.e. 10^25, floating point operations – a threshold that only the most advanced models currently reach). Providers of these models will need to perform ongoing risk evaluations, report on their systemic risks and identify serious incidents. Codes of practice will be developed to operationalise these standards and will act as an interim measure until officially harmonised standards for foundation model development and deployment are in place.

Importantly, the European Commission will also have the power to designate specific models as meeting the relevant criteria and therefore falling within scope of these special rules.

High-risk, limited-risk and low-risk AI systems

Looking more broadly, the AI Act as a whole pursues an all-encompassing but risk-based approach: the greater the risk posed by an AI application, the more far-reaching the applicable obligations.

The AI Act focuses on high-risk AI systems, for which it sets out a number of requirements. High-risk AI systems are specifically designated as such by the regulation and include AI systems used in certain products, such as machinery, medical devices, vehicles, and radio and pressure equipment. The category also covers other use cases considered particularly sensitive, in fields such as biometric identification, critical infrastructure, employment and education.

Developers and organisations that use or operate high-risk AI systems are the two main parties subject to the AI Act. Obligations are primarily placed on the developers (“providers”), who are required to implement a wide range of technical and governance measures before the AI system is sold, licensed or used in a production environment. Examples of such measures include:

  • implementing a risk management framework;
  • conducting a conformity assessment;
  • adopting comprehensive data governance standards;
  • developing functionality that facilitates the explanation of decisions; and
  • establishing appropriate governance measures.

Furthermore, high-risk AI systems must be designed and developed to achieve accuracy, robustness, cybersecurity, sustainability and longevity.

For the users of high-risk AI systems, the obligations include:

  • using the AI system only in accordance with the technical documentation provided by the provider;
  • monitoring the operation of the system on an ongoing basis;
  • conducting a mandatory fundamental rights impact assessment prior to putting an AI system into use; and
  • reporting any malfunctions detected to the providers.

Limited-risk AI systems must comply with transparency obligations, in particular informing natural persons that they are interacting with an AI system or disclosing that content has been generated by AI. The AI Act is not expected to impose any requirements on low-risk AI systems; instead, it encourages providers to develop voluntary codes of conduct.

Prohibited AI systems

There are a limited number of AI use cases that are considered to pose an unacceptable level of risk to EU citizens and are, therefore, prohibited. These will include, for instance, AI systems that:

  • involve the untargeted scraping of facial images from the internet to create facial recognition databases;
  • involve emotion recognition in the workplace;
  • use subliminal techniques beyond a person's consciousness;
  • exploit the vulnerabilities of certain individuals due to their age or physical or mental disability in order to significantly distort their behaviour; and
  • involve social scoring to evaluate personal characteristics.

Enforcement powers

The EU institutions agreed that a range of maximum fines will apply for infringements of the AI Act. These range from €35m or 7% of global annual turnover, whichever is higher (a ceiling that exceeds the maximum fines under the GDPR), down to €7.5m or 1.5% of global annual turnover, depending on the size of the company and the specific infringement of the AI Act.

Next steps

The political agreement of the EU institutions must now be translated into a final legislative text. Technical trilogue meetings are planned for 11 and 13 December 2023, during which the details will be worked through. This will be followed by a final linguistic review. It is expected that this process could take between one and three months.

Following this, the AI Act will be formally adopted and published. Organisations will then have two years to prepare for compliance before the regulation becomes enforceable. The prohibitions on certain AI systems, however, will become enforceable after just six months.

Crucially, the AI Act is set to become a model for global AI governance, much as the GDPR has become a model for data protection regulation around the world. However, the complexities of the AI Act and the legal novelties it introduces are likely to create substantial organisational challenges for providers and users of AI systems. It will therefore be imperative to devote significant resources to understanding the practical effect of the new framework in order to be ready for compliance when the time comes.

Authored by Stefan Schuppert, Eduardo Ustaran, Nicole Saurin, Jasper Siems, Dan Whitehead, and Sebastian Faust.
