2024-2025 Global AI Trends Guide
On Thursday, 11 May 2023, the European Parliament’s Committees for the Internal Market and Civil Liberties voted for far-reaching amendments to the EU’s proposed artificial intelligence regulation (“AI Act”). The Committees’ proposals seek to further develop the framework for regulating the risks associated with AI that has dominated the political debate in this area for several years, while also addressing the emerging and important concerns associated with generative AI and other forms of general purpose AI (“foundation models”).
The proposed amendments also expand the scope of obligations for so-called “users” of AI systems – and establish “trustworthiness requirements” that would, if implemented, apply to any AI system, irrespective of whether it is considered “high-risk” or not. Whilst the EU lawmakers deserve respect for their comprehensive and courageous amendment proposals, some of those proposals may simply not go in the right direction. Supporting the competitiveness of EU research institutions and businesses requires careful consideration – the EU cannot afford to fall further behind in the fast-paced global race for AI innovation.
The revised draft AI Act, as proposed by the relevant Committees, will now go to a plenary vote of the European Parliament, which is scheduled to take place in June 2023 and is expected to result in approval. Given the substance and scope of the proposed changes, intensive trilogue negotiations will then likely follow between the Council of the European Union, the Commission and the Parliament, which will need to reach agreement on the final version of the Act. It is possible that a successful resolution may be reached before the end of 2023 – so that the world’s first comprehensive piece of AI-specific regulation could enter into force in early 2024.
The Parliament’s proposals introduce a number of amendments that are particularly controversial and likely to dominate the upcoming discussion, namely: (1) new rules on foundation models and generative AI, (2) new trustworthiness requirements for all AI systems, (3) an expansion of the obligations that apply to users of AI systems, (4) changes to the scope of the AI systems labelled “high-risk” and (5) additions to the list of prohibited AI practices.
1. Foundation Models and Generative AI
The proposed amendments introduce new rules for foundation models – that is, AI models trained on a very broad range of sources and large amounts of data for a wide variety of applications. Such foundation models typically serve as the basis for a wide range of downstream tasks and can be made available for such specific, dependent applications via open-source release or through application programming interfaces (APIs). A particularly visible and indeed omnipresent form of foundation model is so-called “generative AI”, where the model is designed to generate content of many kinds, such as text, code, images, animations, videos or music.
The proposed provisions impose specific obligations on the providers of such foundation models to:
The proposed provisions impose additional and farther-reaching transparency requirements for “generative” foundation models, such as:
Exemptions from those obligations and requirements for foundation models in general, and generative AI in particular, are provided for research activities and for AI components made available under open-source licenses.
2. Challenges for the proposed rules for Foundation Models and Generative AI
Regulating foundation models in general, and generative AI in particular, is sorely needed and makes much sense. Yet such regulation is anything but easy, and here are some of the challenges that the EU legislator’s attempt may encounter in the discussion that will now ensue:
3. General AI Principles
Previous versions of the AI Act from the European Commission and Council have predominantly focused on introducing obligations in relation to ‘high-risk’ use cases of AI. However, the Parliament’s amendments propose to significantly widen the scope of the regulation, by introducing a set of general principles for the development and use of AI. These principles are intended to apply to all AI systems, irrespective of the risk that they pose. They will require organizations to exercise best efforts in developing and using AI systems in accordance with the following requirements:
4. Additional User Obligations
While providers of high-risk AI systems (i.e., developers) are subject to the primary obligations under the AI Act, the Parliament’s amendments also propose to broaden the range of requirements that apply to organizations that deploy these systems. These organizations have been referred to as ‘users’ in past versions of the AI Act, but are now referred to as ‘deployers’.
These additional requirements include, for example:
5. High-Risk AI Systems
The comprehensive regulation of the specific risks deriving from so-called “high-risk AI systems” has been the main focus and objective of the AI Act. Whether or not a specific AI system qualifies as “high-risk” depends on its sphere of application, and each qualifying use case is explicitly listed in the AI Act. AI systems used in fields such as medical devices, automotive vehicles, educational assessment, job recruitment, credit assessments, critical infrastructure and health insurance had already been identified as “high-risk” in previous drafts.
New to the list of these “high-risk AI systems” are applications that aim to “influence voters in political campaigns.” This addition was clearly needed and makes sense from any perspective. Since AI (including all sorts of data-analytical methods) will clearly be used in the context of elections, it is worthwhile for these uses to be particularly closely scrutinized by means of regulation. At the moment, there is hardly any field of AI application in greater need of sensible regulation.
6. Prohibited AI Practices
Art. 5 of the previous draft AI Act already provided, even before the latest amendment proposals, for wide-ranging prohibitions of many AI practices regarded as overly intrusive, discriminatory or otherwise abusive. Those prohibitions covered, in particular, any AI practice used to (1) apply forms of “social scoring”, (2) exploit personal vulnerabilities, (3) discriminate against or otherwise unduly categorize people according to gender, race, age, etc., or (4) undertake real-time biometric identification in publicly accessible spaces.
These existing prohibitions were the target of intense criticism by many human rights groups and the subject of multiple efforts to extend the legal protection against any form of intrusive AI practice. These efforts proved successful, and the latest amendment proposals by the EU Parliament Committees are now significantly more restrictive. The list of additional or significantly extended prohibitions now also covers systems for:
It is to be expected that some of these amendment proposals will meet fierce resistance from some EU member states, which will exercise their say through the Council in the upcoming trilogue negotiations. The questions that will arise are delicate and difficult indeed: is it acceptable to use the power of AI to trawl publicly available information and build new databases to help identify prospective wrongdoers? Is a strong limitation on the retrospective analysis of public-space footage sensible in the context of criminal investigations? Surely, these and related questions will continue to stir intensive discussion – in the legislature and in society at large.
We will continue to inform you about the progress of this and other AI related pieces of legislation.
Authored by Leopold von Gerlach and Dan Whitehead.