In this article, we share our key takeaways from our recent webinar titled "AI, Digital Health & European Health Data Space", which was part of our "AI, Big Data and Law, Digital Deep Dive" webinar series. The webinar delved into the latest trends in artificial intelligence and digital health, highlighting their transformative potential in the health care industry. We explored the legal challenges facing the life sciences industry in the context of digital health and AI, with insights into compliance, liability, and regulatory considerations; in this context, we paid particular attention to upcoming EU legislation. Finally, we discussed practical strategies for organizations to prepare for the evolving legal landscape of AI and digital health, and shared actionable takeaways.
AI & Digital Health Trends:
First, our team delved into the latest trends in artificial intelligence and digital health, highlighting their transformative potential in the health care industry. AI could help reduce inefficiencies and costs, improve access to health care, and increase its quality.
Areas in which AI is and will increasingly be used in health care include mobile health, health information technology, wearable devices, telehealth and telemedicine, and personalized medicine. AI creates new methods for diagnosis and disease detection and is used for mobile medical treatment, e.g. by analysing data from patients' wearable devices and detecting pathological deviations from physiological states. Moreover, personalized medical products are being developed using AI-generated health data of patients, including their medical history. In the future, AI could also benefit the medical decision-making process: the selection of adequate treatments and medical operations for specific individuals will be based on previous patient data indicating potential benefits and risks.
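To give a flavour of the kind of wearable-data analysis described above, the following is a minimal, purely illustrative sketch that flags readings deviating markedly from a patient's own physiological baseline. The data, the z-score approach and the threshold are our own hypothetical assumptions, not a certified diagnostic method.

```python
# Illustrative sketch only: flag wearable readings that deviate markedly
# from a patient's own physiological baseline. All data and the z-score
# threshold are hypothetical assumptions; a real diagnostic device would
# require clinical validation and a conformity assessment.
from statistics import mean, stdev

def flag_deviations(baseline: list[float], readings: list[float],
                    z_threshold: float = 3.0) -> list[int]:
    """Return indices of readings more than z_threshold standard
    deviations away from the patient's baseline."""
    m, sd = mean(baseline), stdev(baseline)
    return [i for i, value in enumerate(readings)
            if abs(value - m) > z_threshold * sd]

# Example: resting heart rate in beats per minute.
baseline_bpm = [62, 64, 61, 63, 65, 62, 63]
new_bpm = [64, 118, 62]
print(flag_deviations(baseline_bpm, new_bpm))  # -> [1], the 118 bpm reading
```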
AI can also be utilized in various stages in the lifecycle of a medical product itself from drug discovery, non-clinical development, clinical trials (in particular in the form of data analysis) to manufacturing.
Legal Challenges for the Life Sciences Industry:
Next, our speakers explored the unique legal challenges facing the life sciences industry in the context of digital health and AI, providing insights into compliance, liability, and regulatory considerations.
The current legal framework does not always consider the specificities of AI. Even in the context of health care, there are no specific regulations for learning AI software yet. Therefore, the general provisions of the Medical Device Regulation ("MDR") apply to software as a "medical device" (Art. 2 Para 1 MDR) or "accessory for a medical device" (Art. 2 Para 2 MDR), making the placing on the market of AI-based medical devices subject to a CE marking obligation (Art. 20 MDR) and a corresponding conformity assessment procedure (Art. 52 MDR). In addition, medical devices incorporating programmable electronic systems, including software, or devices in the form of software must, according to Annex I, Section 17.1 MDR, be designed to ensure repeatability, reliability and performance in accordance with their intended use. Two worlds therefore collide when self-learning, dynamic AI meets the requirements for medical device manufacturing: according to the MDR, software must be designed to ensure repeatability. For "locked" algorithms this is not a problem, as they provide the same result each time the same input is applied. Continuously learning and adaptive algorithms, however, especially software based on a "black box" model, are by definition not supposed to deliver repeatability. The particular benefit of AI for the health of patients, both individually and in general, is precisely its ability to learn from new data, adapt, improve its performance and generate different results. This is why specific regulations for AI medical devices are needed.
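The repeatability tension can be made concrete with a small sketch. The toy models and numbers below are entirely hypothetical: a locked model returns identical outputs for identical inputs, while a continuously learning model updates its parameters with new data, so the same input can yield a different output over time.

```python
# Hypothetical toy models illustrating the repeatability tension.

class LockedModel:
    """Parameters frozen at release; same input -> same output."""
    def __init__(self, weight: float):
        self.weight = weight

    def predict(self, x: float) -> float:
        return self.weight * x


class ContinuouslyLearningModel:
    """Parameters drift as the model keeps training on incoming data."""
    def __init__(self, weight: float, learning_rate: float = 0.1):
        self.weight = weight
        self.learning_rate = learning_rate

    def predict(self, x: float) -> float:
        return self.weight * x

    def update(self, x: float, target: float) -> None:
        # One gradient-descent step on squared error.
        error = self.predict(x) - target
        self.weight -= self.learning_rate * error * x


locked = LockedModel(weight=0.5)
adaptive = ContinuouslyLearningModel(weight=0.5)

print(locked.predict(2.0), locked.predict(2.0))  # 1.0 1.0 -- repeatable
print(adaptive.predict(2.0))                     # 1.0
adaptive.update(x=2.0, target=3.0)               # learns from new data
print(adaptive.predict(2.0))                     # 1.8 -- same input, new output
```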
At the EU level, several legislative processes are under way to adapt the current legislative landscape to Europe's digital future, particularly in light of the proliferation of AI applications. Of note are the EU Data Strategy and the EU AI Strategy.
The EU Data Strategy comprises data protection laws and data governance legislation, such as the EU Data Governance Act, the Proposal for an EU Data Act, and sectoral legislation to develop common European data spaces, such as the proposal for the European Health Data Space Act ("EHDS"). The purpose of the EHDS is generally twofold: it aims to empower individuals to have control over their electronic health data and health care professionals to have access to relevant health data (primary use), and to facilitate access to anonymized or pseudonymized electronic health data for researchers, innovators and other data users for secondary use purposes. With regard to secondary use, the EHDS provides derogations on the basis of Article 9(2) lit. g), h), i) and j) of the EU General Data Protection Regulation ("GDPR") for sharing, collecting and further processing special categories of personal data by data holders and data users. However, even with the EHDS in place, data protection challenges will remain when it comes to utilizing health data, e.g. study data collected in clinical trials or usage data generated in the course of the use of e-health applications, for secondary purposes. These challenges include ensuring compliance with the transparency requirements under Art. 13, 14 GDPR, the 'change of purpose' requirements under Art. 6(4) GDPR and the right to object to the use of data according to Art. 21(1) GDPR.
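As a purely illustrative aside, keyed pseudonymization is one common technique for preparing health data for secondary use. The key, field names and record below are hypothetical; importantly, unlike anonymized data, pseudonymized data remains personal data under the GDPR.

```python
# Minimal sketch of keyed pseudonymization for secondary use of health data.
# The key and record are hypothetical; real EHDS/GDPR compliance involves
# far more than this single step.
import hashlib
import hmac

SECRET_KEY = b"held-separately-by-the-data-holder"  # hypothetical key

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a keyed hash. The mapping can be
    re-established by whoever holds the key, which is why the output is
    pseudonymized rather than anonymized."""
    digest = hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]  # shortened for readability

record = {"patient_id": "DE-12345", "diagnosis": "I10", "age_band": "60-69"}
shared = {**record, "patient_id": pseudonymize(record["patient_id"])}
print(shared)
```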
In the context of the EU AI Strategy, a Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain Union legislative acts ("draft AI Act") has been put forward.
The draft AI Act aims to promote "trustworthy artificial intelligence and to ensure a high level of protection of health, safety, fundamental rights, democracy and rule of law and the environment from harmful effects of artificial intelligence systems in the Union while supporting innovation and improving the functioning of the internal market." It takes a risk-based approach, setting out graduated requirements for AI systems: AI systems posing an "unacceptable risk" are prohibited, "high-risk" AI systems are subject to increased requirements, while only non-binding specifications apply to low-risk AI systems. However, the draft AI Act does not contain specific liability provisions.
The draft AI Act may become relevant in the context of health care, as according to the Commission's proposal, almost any AI-based medical device will be classified as a high-risk AI system (Art. 6 para 1 in conjunction with Annex II, Section A, no. 11 and no. 12 draft AI Act), and Class II and Class III medical devices will automatically be considered high-risk AI systems. In the case of AI-based medical devices, the conformity assessment required by the MDR is complemented by the requirements of the draft AI Act (see Art. 43 Para 3 and 4 draft AI Act). However, the classification of AI-based medical devices as high-risk AI systems may be subject to change in the course of the EU legislative procedure regarding the draft AI Act. Amendments proposed by the European Parliament include restricting the definition of "high-risk" AI systems to those that pose a "significant risk", e.g. AI systems that could endanger human health. Instead, the Parliament's position on the draft AI Act includes extended requirements for general-purpose AI systems.
Legal challenges also arise in relation to liability for damage caused by AI. Due to the opacity, complexity and autonomy of AI systems, liability for damage caused by AI cannot always be ensured under the current legal liability framework. Therefore, on 28 September 2022, the EU Commission brought forward proposals for a revised Product Liability Directive ("PLD Proposal") and for a directive on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive) ("AILD Proposal").
The PLD Proposal revises the narrower concepts of the existing PLD from 1985, confirming that AI systems, software and AI-enabled goods are 'products' within the scope of the PLD and ensuring that injured persons can claim compensation when a defective AI-based product causes death, personal injury, property damage or data loss. The proposal reduces the burden of proof on consumers by including provisions requiring manufacturers to disclose evidence, as well as rebuttable presumptions of defect and causation. In order not to unduly burden potentially liable parties, the PLD Proposal maintains provisions for exemptions from liability due to scientific and technical complexity. However, the Council's amendments of 15 June 2023 to the PLD Proposal allow Member States to exclude such an exemption altogether. To address the increasing number of products that can (and sometimes even must) be modified or upgraded after being placed on the market, the revised PLD will apply to re-manufacturers and other businesses that substantially modify products, when the modified products cause damage to a person. In this respect, challenges remain in relation to changes caused by self-learning AI systems.
The AILD Proposal complements the liability regime under the PLD by establishing specific rules for a non-contractual fault-based civil liability regime for damage caused by AI, including stricter rules for so-called high-risk AI systems.
As there is no sector-specific liability regime for medical devices, these general liability rules will apply to AI-based medical devices.
How to Prepare:
To wrap up the event, the panel discussed practical strategies for organizations to prepare for the evolving landscape of AI and digital health, and provided actionable takeaways.
From a product safety and liability perspective, it is particularly important to keep the full potential scope of the use of AI and digitised processes in mind. Even seemingly small adjustments can make all the difference when it comes to liability issues. For this very reason, it is particularly important not only to implement comprehensive compliance systems, but also to assess potential impacts as well as risk mitigation and documentation measures for each product line, if not each individual product, with all stakeholders involved at an early stage.
In particular, deployers and developers of AI-based medical devices should conduct a regulatory impact and risk analysis of all AI applications. Data and algorithm governance standards should be extended to cover all data, models and algorithms used for AI throughout the lifecycle of a medical device, as sketched below.
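As a simplified illustration of what such lifecycle-wide algorithm governance might record, the sketch below defines a versioned governance entry tying a deployed model to its training data, intended purpose and risk classification. Every field name and value is a hypothetical example, not a prescribed format.

```python
# Hypothetical building block of algorithm governance: a versioned record
# linking each deployed model to its training data, intended purpose and
# risk classification. All fields and values are illustrative examples.
from dataclasses import dataclass, field
from datetime import date

@dataclass(frozen=True)
class ModelGovernanceRecord:
    model_name: str
    version: str
    intended_purpose: str
    risk_class: str              # e.g. MDR class / draft AI Act risk tier
    training_data_snapshot: str  # reference to the frozen dataset used
    approved_on: date
    known_limitations: list[str] = field(default_factory=list)

record = ModelGovernanceRecord(
    model_name="arrhythmia-detector",
    version="1.4.2",
    intended_purpose="Flag suspected atrial fibrillation in ECG wearables",
    risk_class="high-risk under the draft AI Act (illustrative)",
    training_data_snapshot="datasets/ecg/2023-06-01",  # hypothetical path
    approved_on=date(2023, 7, 1),
    known_limitations=["Not validated for paediatric patients"],
)
print(record)
```

Keeping such records per model version supports both the MDR's documentation duties and the traceability expectations of the draft AI Act, and makes post-market modifications auditable.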
Authored by Arne Thiermann, Nicole Saurin, and David Bamberg.
Supported by Lara Bruchhausen.