
The AI Act – Beyond high-level overviews: a temporal loophole


The long-awaited AI Act has recently been codified under Regulation (EU) 2024/1689. However, its provisions will be introduced gradually for operators involved in the artificial intelligence sector. This article provides an overview of the phased approach adopted by the AI Act for the implementation of its obligations, along with a critical analysis of potential loopholes concerning high-risk and limited-risk AI systems.

As the first initiative of its kind, the AI Act, now codified under Regulation (EU) 2024/1689, is commendable for its goal of regulating AI while fostering innovation and ensuring trustworthy, human-centered technology. However, given the legal novelty and technical complexity of the subject, some provisions of the AI Act and their rationale may not be as straightforward as a first reading might suggest. On the contrary, across its thirteen chapters, the AI Act provides much food for thought. The focus of this article is the open questions raised by the phased approach to full applicability of the AI Act and, in particular, its exemptions.

Let's start with the basics: Articles 111 and 113 of the AI Act

The AI Act entered into force on 1 August 2024 and will apply from 2 August 2026. It is, however, more complicated than that, as the AI Act introduces both a phased approach within that timeframe and specific exemptions, including the following:

  • General provisions (scope, definitions, AI literacy) and provisions related to prohibited practices (e.g. general-purpose social scoring, emotion recognition in the workplace, etc.) will apply from 2 February 2025;
  • For AI systems that are components of certain large-scale IT systems1 placed on the market or put into service before 2 August 2027, obligations will apply from 31 December 2030;
  • Certain provisions related to the classification requirements for high-risk AI systems will apply from 2 August 2027;
  • GPAI models-related obligations, provisions on the AI Act's governance structure and penalties will apply from 2 August 2025. However, for GPAI models placed on the market before 2 August 2025, relevant obligations for providers will apply from 2 August 2027;
  • For high-risk AI systems placed on the market or put into service before 2 August 2026, the relevant obligations will apply only if, as from that date, those systems are subject to "significant changes" in their design or intended purpose.

When theory meets practice: the regulatory loophole of older high-risk AI systems

Although high-risk AI systems receive particular attention due to their nature, second only to outright prohibited practices, the current provisions create a loophole that does not seem to fully align with the rationale of the AI Act.

Indeed, unlike general legislative practice in the field of product safety, the AI Act does not set a delayed (but fixed) deadline for excluding non-compliant products from the market; rather, it introduces a cut-off date for its applicability. In particular, with regard to high-risk AI systems already placed on the market or put into service before 2 August 2026, under Article 111(2) the AI Act will not apply unless "significant changes" are made to the design of those systems after that date. By introducing this limited exception, the AI Act leaves the door open to potentially non-compliant high-risk AI systems,2 unless the opaque requirement of "significant changes" being made after the cut-off date is met.

In this respect, a key challenge will be defining "significant changes", as the current, rather generic clarifications interpret the term as unforeseen or unplanned design-related changes made to an AI system after it is placed on the market, which affect the system's logic, algorithm choices, or key design decisions.3 While the rapid pace of technological development could ideally act as a form of natural selection, quickly weeding out non-compliant high-risk AI systems (or forcing them to undergo "significant changes" that would require bringing them into compliance), it may be argued that most such systems are designed for specific purposes and will not undergo significant design changes, even if their methods evolve. On the other hand, and seeing the glass half full, the vagueness of "significant changes" might be the perfect tool for adopting a more rigorous approach, as it allows for broader interpretations.

As we try to determine which of these readings will prevail, the rationale behind the approach adopted for high-risk AI systems is not clear, especially when compared to the different approach followed for GPAI models under the same Article 111(3). While for GPAI models the date of placing on the market only determines the year (2025 or 2027) from which the relevant obligations apply, the same does not hold for high-risk AI systems – leading one to wonder whether this framework creates an unfair market advantage (or, even worse, a gap in protection within the AI Act's acquis). In this context, the stated goal of avoiding market disruption4 or preventing a potential overflow of court cases5 is not entirely convincing, particularly against the overall aim of Regulation (EU) 2024/1689.

Limited-risk AI systems: long forgotten

While the compliance expected of GPAI models and high-risk AI systems differs in several respects, at least Article 111 provides express rules for them. In contrast, neither that provision nor Article 113 expressly considers limited-risk AI systems,6 leaving room for much uncertainty.

Indeed, both providers and deployers of limited-risk AI systems are required to comply with the transparency requirements of Article 50, but where such systems are placed on the market before 2 August 2026, neither a grace period (as for GPAI models) nor an express exclusion (as for high-risk AI systems) is provided. Furthermore, as previously mentioned, the AI Act does not align with general product safety regulations by prohibiting non-compliant AI systems from being (or continuing to be) marketed after a certain date. This raises a question: when should operators of existing limited-risk AI systems start worrying about compliance with the applicable rules?

For some, the wording (or lack thereof) of Articles 111 and 113 may suggest that, from 2 August 2026, the AI Act will cover all existing limited-risk AI systems, imposing transparency obligations on both providers and deployers of, for instance, an AI system generating or manipulating content that was previously placed on the market. Others may argue that both a legal and a literal interpretation point in the opposite direction, since the omission of limited-risk AI systems from the specific regime set forth under Article 111 (as opposed to the general rule of Article 113) seems to exclude those placed on the market or put into use before 2 August 2026 from the scope of the AI Act.

Despite being anchored to the letter of Article 111, this reading remains particularly puzzling given the two-year period available to providers to achieve compliance.

In practical terms, excluding limited-risk AI systems placed on the market or put into use before 2 August 2026 would mean, for instance, that natural persons would not be informed that they are interacting with an AI system – or, rather, that deployers would not be under a binding obligation to inform them. The rationale behind such an exclusion is neither clear nor easy to endorse, especially given the lack of any exception that could potentially trigger the AI Act's applicability to such systems.

At the same time, given the overall scope of the AI Act, one wonders whether such an omission was deliberate. After all, the newly introduced Framework Convention on Artificial Intelligence7 finalized by the Council of Europe and signed by (nothing less than) the EU Commission boldly requires the signing parties to "seek to ensure that, as appropriate for the context, persons interacting with artificial intelligence systems are notified that they are interacting with such systems rather than with a human", thus echoing the transparency obligations set forth in the AI Act for limited-risk AI systems – possibly in an attempt not only to anticipate them, but also to extend them.

While an unequivocal response will have to wait for the EU Commission's guidelines, a practical approach would nevertheless suggest distinguishing between the obligations of providers and those of deployers. While the date of "placing on the market" of the limited-risk AI system at issue should guide the position of providers and their obligations, it would be rational, and consistent with the AI Act's scope, for deployers at least to meet their transparency obligations from 2 August 2026 – including for limited-risk AI systems already on the market or put into use (for example, by disclosing deepfakes or generated text).

The road ahead: high hopes for future guidelines

In the context described above, the application of different rules – such as in the field of AI liability, where a Directive is currently under discussion, also in light of the AI Act – may serve as a means of regulating older AI systems (both high- and limited-risk) and ensuring a safer digital environment. Against this background, high hopes are placed in the guidelines the EU Commission is expected to adopt, as well as in a hoped-for update of the Blue Guide on the implementation of EU product rules. In the meantime, doubts remain.


Authored by Massimiliano Masnada, Giulia Mariuz, Ambra Pacitti, and Anna Albanese.

References
1 Specifically, those established by the legal acts listed under Annex X of the AI Act (e.g. those related to the Visa Information System).
2 Only those AI systems developed and placed on the market or put into use prior to the AI Act's enactment.
3 Recital (177) of the AI Act links the concept of "significant change" to the notion of "substantial modification", defined under Article 3(23) of the AI Act.
4 Recital (177) of the AI Act clarifies that the EU legislator's decision is aimed at ensuring legal certainty, an appropriate adaptation period for operators and avoiding disruption to the market, including by ensuring continuity of the use of AI systems.
5 As pointed out by some commentators.
6  Such as chatbots or AI systems generating synthetic audio, image, video or text content.
7  Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law.
