FDA AI & Medical Products Paper advocates for a tailored risk-based regulatory framework

The Food and Drug Administration’s (FDA’s) Center for Biologics Evaluation and Research (CBER), Center for Drug Evaluation and Research (CDER), Center for Devices and Radiological Health (CDRH), and Office of Combination Products (OCP) recently published a joint discussion paper on artificial intelligence (AI) and medical products. The paper aims to provide greater transparency into how FDA’s medical product Centers are collaborating to safeguard public health while fostering responsible and ethical innovation.

Since 1995, FDA has received 300 submissions for drugs and biological products with AI components and more than 700 submissions for AI-enabled devices, according to FDA Commissioner Robert Califf, MD, in a recent FDA Voices article. These submissions integrate AI in various contexts, including drug discovery and repurposing, clinical trial design, dose optimization, endpoint/biomarker assessment, and postmarket surveillance. The Discussion Paper is the agency’s most recent publication on AI and medical products since it announced its intent to publish discussion papers at last year’s virtual public workshop on the “Application of Artificial Intelligence and Machine Learning for Precision Medicine,” which we summarized online here. Previous agency publications include discussion papers on AI in drug development and advanced manufacturing, which we discussed online here.

We anticipate that the various Centers will continue to issue guidance as technology evolves. In the meantime, stakeholders developing innovative medical products that use AI should take time now to understand and prepare for FDA’s risk-based regulatory approach to AI management across the product lifecycle.

Key cross-Center takeaways

The Discussion Paper states that AI management requires a risk-based regulatory framework built on robust principles, standards, best practices, and state-of-the-art regulatory science tools that can be applied across AI applications and tailored to the relevant medical product. The agency’s approach encompasses the total AI lifecycle, from ideation and model training to real-world implementation, monitoring, and maintenance.

The Discussion Paper highlights four goals for collaboration across agency Centers and Offices, namely:

  1. Fostering collaboration with developers, patient groups, academia, global regulators, and other interested parties to cultivate a consistent, patient-centered regulatory approach that safeguards public health.

  2. Advancing the development of regulatory approaches that support innovation, including:

    1. monitoring and evaluating trends and emerging issues to detect knowledge gaps; and

    2. publishing additional guidance documents:

      1. Final guidance on marketing submission recommendations for predetermined change control plans for AI-enabled device software functions;

      2. Draft guidance on life cycle management considerations and premarket submission recommendations for AI-enabled device software functions; and

      3. Draft guidance on considerations for the use of AI to support regulatory decision-making for drugs and biological products.

  3. Promoting the development of standards, guidelines, best practices, and tools across the product life cycle, including building on the Good Machine Learning Practice Guiding Principles, to:

    1. promote safe, responsible, and ethical AI use,

    2. identify best practices for long-term safety and real-world performance of AI-enabled medical products,

    3. develop best practices for evaluating whether training data is fit for use, and

    4. create a framework for quality assurance of AI-enabled tools used in the Total Product Life Cycle (TPLC).

  4. Supporting research related to the evaluation and monitoring of AI performance to gain insights into AI’s impact on medical product safety and effectiveness, including through AI projects that demonstrate:

    1. where bias can be introduced in the AI development life cycle and how it can be addressed,

    2. how health inequities are associated with AI in medical product development, in order to promote equity, data representativeness, and other ongoing diversity, equity, and inclusion efforts, and

    3. how ongoing monitoring of AI tools in medical product development ensures adherence to standards and maintains performance and reliability through the TPLC.

How manufacturers can prepare for further FDA AI regulation

As medical product manufacturers prepare for future FDA guidance on responsible AI use in medical product development, they should consider developing policies and procedures that provide governance for the responsible use of AI and other digital tools used in medical product development (“digital tools”). While the regulatory landscape for AI and digital tools is still evolving, manufacturers can draw on industry best practices and guidelines such as the U.S. National Institute of Standards and Technology (NIST) AI Risk Management Framework, discussed here; the Health Sector AI Commitments referenced by the White House; and the Coalition for Health AI (CHAI)’s AI in health care guidelines. CHAI, recognized by Commissioner Califf as a community that will inform FDA’s thinking on AI, released a guidance document entitled “Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare,” which may help manufacturers identify relevant considerations for AI and digital tool policies and procedures.

Specifically, the guidelines highlight the following areas as key elements of trustworthy AI: (1) utility, including validity, reliability, testability, usability, and benefit; (2) safety; (3) transparency and accountability; (4) explainability and interpretability; (5) fairness and bias mitigation; (6) security and resilience; and (7) privacy. These broadly align with the principles of other agencies and industry groups. For example, the U.S. Department of Health and Human Services (HHS) has cited the “FAVES” principles – that AI should lead to health care outcomes that are Fair, Appropriate, Valid, Effective, and Safe – in its recent rulemakings.

While the legal landscape for regulation of AI and other digital tools in medical product development remains uncertain, manufacturers may consider developing policies that address:

  • Bias mitigation and generalizability: Procedures on AI should set clear guidelines for how the manufacturer will acquire, select, evaluate, and analyze the data used to train and fine-tune AI to ensure it is fit-for-purpose, explainable, reliable, unbiased, and interpretable. These procedures should also specify how AI will be developed, trained, tested, deployed, and updated to mitigate bias, address identified issues, and support generalizability. Policies and procedures should account for and mitigate the systemic, computational and statistical, and human-cognitive biases inherent in the particular training data sets, fine-tuning activities, and applications of AI in medical product development.

  • Transparency and auditability: Policies on AI should clarify how manufacturers will document, audit, and record information related to AI and other digital tools to promote responsible AI use and comply with relevant federal and state regulations, including 21 CFR Part 11. This includes developing appropriate internal materials for teams developing and supporting these technologies, as well as external materials for customers, users, and those whose data may have been involved in research and development activities. Manufacturers should evaluate whether the intended use and design of AI models are clear to users, and AI decision-making pathways should be documented in a way that promotes traceability.

  • Consent, training, and access: A major ethical consideration for the use of AI and other digital tools in medical product development is ensuring that users (including relevant stakeholders, like the manufacturer’s employees, patients, caretakers, health care providers, and academicians) understand, are educated on, and have access to the AI and other digital tools utilized, especially when these technologies are used in settings where they may pose a greater risk to public health. Manufacturers should consider developing an approach to informed consent that provides transparency to users regarding the kinds of data that will be collected; the potential uses of that data, including for model training and fine-tuning activities; the risks inherent in the collection and use of data; and the methods by which an individual may ask questions, exercise their rights, or communicate concerns to the manufacturer. Many of these considerations are consistent with existing privacy requirements or principles, including some newer state privacy laws. Manufacturers should consider what types of training may be appropriate for different stakeholders on the relevant technologies and, where necessary, should document concrete steps taken to help ensure access (where applicable), data integrity, and bias mitigation. Manufacturers should also consider engaging with relevant stakeholders to understand gaps in stakeholders’ training and awareness of what data may be used to support these technologies and where and how those technologies may be used.

  • Third parties: Manufacturers should evaluate their relationships with third parties, such as service providers and vendors, so that they can subject those third parties to policies and procedures that promote responsible AI use, risk management, use limitations, and data- or intellectual property-related rights and obligations.

  • Privacy and cybersecurity: When developing and utilizing any AI or other digital tool, medical product manufacturers should establish appropriate rights and guardrails to collect, use, disclose, and safeguard data, especially data generated from individuals, including patients. Depending on the data, uses, and entities involved in a particular application of AI or another digital tool in medical product development, the risks and requirements may vary. Manufacturers should ensure their practices, policies, and procedures align with relevant state and federal regulations as well as evolving regulator and industry guidance.

  • Independent input and oversight: Manufacturers should consider engaging with technology experts, academics, the medical community, and patients to gain independent insight on how to improve the ideation, design, development, distribution, implementation, monitoring, and maintenance of AI and other digital tools.

Future outlook

The Discussion Paper suggests that FDA is considering what role each product Center will play in regulating the use of AI in medical product development, regardless of whether AI is the “end product.” FDA’s approach to building AI regulatory capacity among its product Centers may involve developing new Centers of Excellence, like CDRH’s Digital Health Center of Excellence and CDER’s Quantitative Medicine (QM) Center of Excellence, announced on March 25, 2024 as an effort to streamline QM-related policy development and best practices, including the use of “innovative technologies, tools and approaches.”

While the Discussion Paper focuses on AI use in drugs, biological products, and devices, a March 2024 blog post published by Dr. Califf suggests that FDA may be evaluating the impact of digitization, AI, and growth in computing power in other FDA-regulated areas, like nutrition and food safety. It is clear that FDA is focused on building its expertise in the use of AI in FDA-regulated product development before publishing further guidance. Based on CBER’s AI/ML page, FDA’s regulatory framework for AI in medical product development can be expected to follow the approach and guidelines in the October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which we analyzed here.


As AI begins to play a larger role in FDA’s regulatory agenda, Hogan Lovells will continue to monitor and evaluate updates to FDA guidance, legislative developments, industry trends, and emerging issues that may impact medical product manufacturers. If you have any questions on the development and use of AI in medical devices more broadly, please contact any of the authors of this alert or the Hogan Lovells lawyers with whom you regularly work.


Authored by Melissa Bianchi, Blake Wilson, Alyssa Golay, and Ashley Grey
