2024-2025 Global AI Trends Guide
Manufacturers and other stakeholders contemplating the use of AI-enabled products and services should be mindful of new rules proposed by the European Commission (EC) to address product liability claims related to AI systems. With the aim of bringing product liability rules into the digital age, the proposal includes new disclosure obligations as well as a new presumption of a causal link for AI systems, and it is intended to balance the interests of industry and consumers.
Specific rules would apply to “high-risk” AI systems – i.e., AI systems that pose significant risks to the health and safety or fundamental rights of persons, including certain medical devices and in vitro diagnostic medical devices.
On Thursday, 18 May, Hogan Lovells is hosting its annual Health Care AI Law and Policy Summit, an informative and interactive program where our thought leaders and industry guests will address a variety of topics including new and emerging health care AI policies and regulatory considerations, implications for ethics and consumer safety, developments in the U.S., UK, and EU, and more. You can register for the Summit online here.
_______________________________________________________________
To adapt the European product liability landscape to the digital age, the European Commission proposed new rules in September 2022 to address liability claims relating to AI systems, including a first-of-its-kind AI Liability Directive (AILD Proposal), which aims to harmonize the member states’ national fault-based civil liability rules for AI-enabled products and services.
The AILD Proposal addresses liability claims related to AI systems. According to the EC, the major difficulty with damage claims related to AI is the burden of proof. To give consumers and other persons seeking compensation for damage allegedly caused by high-risk AI systems effective means to identify potentially liable persons and relevant evidence for a claim, the proposal would grant claimants the right to request the disclosure of evidence both before and during court proceedings. Failure to comply with an order to disclose evidence would lead to a presumption of non-compliance with “a relevant duty of care” that the requested evidence was intended to prove, leaving it to the defendant to rebut this presumption. We have described the mechanics of these proposals in more detail here.
The EC has also expressed the view that the technical features of AI (amongst others, its opacity, autonomous behavior, and complexity) make it difficult for injured persons to meet their burden of proof and obtain compensation for damage allegedly caused by AI systems. For this reason, the AILD Proposal also introduces specific tools intended to make it easier for claimants to substantiate claims for damage caused by interaction with AI systems, namely an evidence disclosure mechanism and rebuttable presumptions. For high-risk AI systems, the proposed rules also distinguish between claims raised against the provider of a high-risk AI system and claims raised against users of the AI system, as we have detailed here.
The AILD Proposal is still at an early stage of the legislative process and could be enacted as soon as 2024. The new rules would then need to be transposed into the member states’ national product liability systems. Businesses operating in the EU life sciences and health care sector should continue to monitor this changing landscape carefully.
Please contact the author or the Hogan Lovells attorneys with whom you regularly work for guidance on your specific product needs.
Authored by Nicole Saurin.