Insights and Analysis

AI Act and GPSR – Bridging the gap between artificial intelligence and product safety in the EU


The AI Act and the GPSR represent a significant milestone in AI and product safety regulation. They both prioritize risk assessment and control, compelling economic operators to recognize and address hazards. Additionally, both sets of rules stress transparency, accountability, and communication as vital components of effective regulation. Despite focusing on different aspects of product safety and technology governance, they share many similarities. Companies fostering strong product safety cultures will be better equipped to navigate the challenges posed by the AI Act.

The recent adoption of the AI Act signifies a new era of regulation for Artificial Intelligence. While this might seem like uncharted territory for some, economic operators with a strong product safety and compliance culture have a distinct advantage. Although the AI Act undoubtedly adds new complexities given the unique challenges the use of artificial intelligence presents, the regulatory philosophy and market approach remain firmly rooted in the principles laid out by the EU lawmaker with respect to product safety, as recently enacted through the General Product Safety Regulation ("GPSR").1

The AI Act is a ground-breaking EU regulation – at the time of writing awaiting only publication in the Official Journal – which establishes the world's first-ever legal framework for artificial intelligence systems.2 This piece of European legislation aims to ensure the responsible development and use of AI systems by setting clear rules for developers and users.

The GPSR is a key regulation aimed at modernising the EU product safety framework by addressing the new challenges posed to the European single market by the digitalisation of economies. Applying from 13 December 2024 and replacing the General Product Safety Directive and the Food Imitating Product Directive, this regulation defines the safety framework applicable to all non-food products that are not regulated by specific European regulations.

At their very core, the AI Act and the GPSR share an overarching safety objective, being aligned in ensuring a high level of protection of health, safety and fundamental rights for individuals. That is not the only similarity between the two pieces of legislation though. Indeed, the two regulations stem from the same regulatory philosophy, which results in complementary policy-making approaches. This is confirmed by Recital 9 of the AI Act, which states that the latter "should apply across sectors and, in line with its New Legislative Framework approach, should be without prejudice to existing Union law, notably on […] product safety, to which this Regulation is complementary."

A focus on risk

The AI Act and the GPSR both adopt an approach based on risk assessment and control, with stricter requirements for both artificial intelligence systems and products presenting the highest risks. Under the GPSR, economic operators are required to conduct a thorough risk analysis – before placing a product on the market but also during its life span – to identify potential hazards associated with their products, considering factors such as design, composition, and foreseeable use.3 Similarly, the AI Act compels providers to perform a comprehensive assessment of the risks posed by their AI systems, including potential biases, inaccuracies, or adverse impacts on individuals or society.4

Moreover, both regulations require economic operators to implement appropriate corrective measures to reduce identified risks to an acceptable level. Indeed, art. 9(8) GPSR and art. 20 of the AI Act follow the same approach – to the extent that in some sections the wording is identical: whenever non-compliance is detected, the manufacturer/provider has a duty to immediately take the necessary corrective actions to bring that product/AI system into conformity, to withdraw it, to disable it, or to recall it, as appropriate. In such a scenario, both pieces of legislation include an obligation to inform the relevant market surveillance authority, as well as other economic operators concerned, of, inter alia, the measures taken. These may involve modifying product designs, incorporating safety features, or implementing safeguards to mitigate potential harm caused by products or AI systems.

Finally, both regulations mandate the careful maintenance of regularly updated records as a crucial requirement. The GPSR compels manufacturers to keep technical documentation demonstrating the conformity of products, including risk assessments.5 Similarly, the AI Act requires exhaustive documentation, covering not only the development and operation of AI systems but also their potential risks and performance.6

Transparency and communication

Another key area of alignment relates to transparency, accountability and communication obligations between economic operators and other subjects.

Firstly, both the GPSR and the AI Act refer to transparency as a fundamental principle. Under the GPSR, economic operators are obliged to provide consumers with clear and comprehensible information about the risks associated with their products.7 This includes instructions for safe use, warnings about potential hazards, and any necessary precautions. Similarly, the AI Act mandates transparency regarding the capabilities, limitations, and potential risks of AI systems. Providers of AI systems, or their authorized representative, must register themselves and their AI systems in a specific EU database, as well as provide clear and accessible information to users, enabling them to make informed decisions about the technology's use.8 In some instances, e.g. GPAI,9 chatbots and deepfakes,10 humans should be made aware that they are interacting with a machine so they can make an informed decision to continue or to step back. This parallels the GPSR's focus on empowering consumers with the knowledge needed to assess product safety.

Moreover, both the GPSR and the AI Act deem effective communication between economic operators and other stakeholders an essential feature of a proficient regulatory environment. Indeed, both in product safety and AI governance, the EU lawmakers deemed that communication fosters a shared understanding of risks, promotes a culture of continuous improvement and, ultimately, is an essential condition for achieving the goal of a safer single market. More in detail, under the GPSR, communication channels – such as the Safety Gate framework composed of the Rapid Alert System, the Portal and the Business Gateway, previously known as RAPEX – must be enhanced to enable swifter notifications of unsafe products and coordination of corrective actions.11 Similarly, the AI Act encourages dialogue between providers, users, and surveillance authorities to address emerging challenges and ensure the responsible development and deployment of AI systems.12

Next steps

While the GPSR and the AI Act may address different aspects of product safety and technology regulation, they have more in common than meets the eye. Indeed, if it is true that the AI Act represents a complex regulatory development, it is also true that economic operators with a strong product safety culture will be better positioned to overcome the challenges the AI Act will pose.

Authored by Christian Di Mauro and Guido Di Stefano.

References
1 Regulation (EU) 2023/988 of the European Parliament and of the Council of 10 May 2023 on general product safety, amending Regulation (EU) No 1025/2012 of the European Parliament and of the Council and Directive (EU) 2020/1828 of the European Parliament and of the Council, and repealing Directive 2001/95/EC of the European Parliament and of the Council and Council Directive 87/357/EEC.
2 The latest version of the AI Act available at the time of writing – as approved by the Council – is available at the following link.
3 See art. 9(2) and (4) as well as art. 6 GPSR.
4 See art. 9(1) and (8) of the AI Act on the obligation to inter alia establish and maintain a risk management system in relation to high-risk AI systems which includes thorough testing before the AI system is placed on the market. For high-risk AI systems, appropriate measures should also be in place to detect, prevent and mitigate possible biases and inaccuracies in the data set – in compliance with data governance obligations set out in art. 10 of the AI Act.
5 See Recital 33 and art. 9(2) GPSR.
6 See art. 11(1) AI Act.
7 See art. 9(7) GPSR.
8 Under Recitals 47 and 48 of the AI Act, "Transparency, including the accompanying instructions for use, should assist deployers in the use of the system and support informed decision making by them." As to the EU database for providers and their AI systems, see art. 49 and 71 of the AI Act.
9 "General-purpose artificial intelligence" as defined by art. 3(6) of the AI Act.
10 See art. 50(4) of the AI Act.
11 See Recital 68 GPSR.
12 See Recital 142 of the AI Act, whereby "Member States are encouraged to support and promote research and development of AI solutions in support of socially and environmentally beneficial outcomes, such as AI-based solutions to increase accessibility for persons with disabilities, tackle socio-economic inequalities, or meet environmental targets, by allocating sufficient resources [to] projects which pursue such objectives. Such projects should be based on the principle of interdisciplinary cooperation between AI developers, experts on inequality and non-discrimination, accessibility, consumer, environmental, and digital rights, as well as academics."
