
Five highlights from FDA’s new AI device regulation Action Plan


On January 12, the U.S. Food and Drug Administration’s Center for Devices and Radiological Health (CDRH) Digital Health Center of Excellence released its new five-part “Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan,” which describes the agency’s efforts to regulate products that incorporate AI. The plan is a direct response to stakeholder feedback on the April 2019 discussion paper, “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning-Based Software as a Medical Device.” Although the Action Plan is light on details for AI regulation, it pledges specific actions that show FDA is moving forward with its “Predetermined Change Control Plan” regulatory framework for machine learning devices. The docket for comments on the plan remains open, and device manufacturers may wish to provide feedback to FDA on its AI policy proposals before the agency finalizes its regulatory framework.

The Action Plan is focused on SaMD, but FDA says it expects that some of this work may also be relevant to other medical device areas, including Software in a Medical Device (SiMD). The Action Plan is divided into five parts, which we discuss in turn below.

1. Tailored regulatory framework for AI/ML-based SaMD – Draft guidance to come

The discussion paper proposed a framework for modifications to AI/ML-based SaMD that relies on the principle of a “Predetermined Change Control Plan.” This plan would include the types of anticipated modifications — referred to as the “SaMD Pre-Specifications” (SPS) — and the associated methodology being used to implement those changes in a controlled manner that manages risks to patients — referred to as the “Algorithm Change Protocol” (ACP). In this approach, FDA expressed an expectation for transparency and real-world performance monitoring by manufacturers that could enable FDA and manufacturers to evaluate and monitor a software product from its premarket development through postmarket performance.

FDA said it believes this framework would enable the agency to provide a reasonable assurance of safety and effectiveness while embracing the iterative improvement power of AI/ML-based SaMD. In comments on the discussion paper, stakeholders provided feedback about the elements that might be included in the SPS/ACP to support safety and effectiveness as the SaMD and its associated algorithm(s) change over time. The Action Plan pledges that FDA will further develop this proposed regulatory framework, with efforts including the issuance of draft guidance in 2021 on a predetermined change control plan. FDA said the draft guidance will focus on the refinement of the identification of types of modifications appropriate under the framework, and specifics on the focused review, including the process for submission/review and the content of a submission.

2. Good Machine Learning Practice (GMLP)

In comments on the discussion paper, stakeholders provided strong general support for the idea and importance of Good Machine Learning Practice (GMLP), and there was a call for FDA to encourage harmonization of the development of GMLP through consensus standards efforts and other community initiatives. FDA agreed in the Action Plan that it should support the development of GMLP to evaluate and improve machine learning algorithms, which the agency said will be pursued in close collaboration with FDA’s Medical Device Cybersecurity Program.

3. Patient-centered approach incorporating transparency to users – Public workshop to come

Following up on FDA’s October 2020 Patient Engagement Advisory Committee (PEAC) meeting on AI/ML-based devices, the agency promised it will hold another public workshop on how device labeling supports transparency to users and enhances trust in AI/ML-based devices. The Action Plan says FDA “is committed to supporting a patient-centered approach including the need for a manufacturer’s transparency to users about the functioning of AI/ML-based devices to ensure that users understand the benefits, risks, and limitations of these devices.” FDA said it intends to gather additional input for identifying types of information that FDA would recommend a manufacturer include in the labeling of AI/ML-based medical devices.

4. Regulatory science methods related to algorithm bias & robustness

Many comments on the discussion paper emphasized the need for improved methods to evaluate and address algorithmic bias, and in response, the Action Plan acknowledges: “Given the opacity of the functioning of many AI/ML algorithms, as well as the outsized role we expect these devices to play in health care, it is especially important to carefully consider these issues for AI/ML-based products.” To show the agency’s prioritization of ensuring that medical devices are well suited for a racially and ethnically diverse patient population, FDA cited in the Action Plan its support of regulatory science research efforts to develop methods to evaluate the algorithmic robustness of AI/ML-based medical software. These efforts include collaborations with researchers at the Centers for Excellence in Regulatory Science and Innovation (CERSIs) at the University of California San Francisco, Stanford University, and Johns Hopkins University.

5. Real-world performance (RWP) – Pilot program to come

The discussion paper described the notion that to fully adopt a total product lifecycle (TPLC) approach to the oversight of AI/ML-based SaMD, modifications to these SaMD applications may be supported by collection and monitoring of real-world data. The Action Plan reasserted: “Real-world data collection and monitoring is an important mechanism that manufacturers can leverage to mitigate the risk involved with AI/ML-based SaMD modifications, in support of the benefit-risk profile in the assessment of a particular marketing submission.”

Accordingly, in the Action Plan, FDA said it will support the piloting of real-world performance (RWP) monitoring by working with stakeholders on a voluntary basis. The agency asserted that evaluations performed as part of these efforts could be used to determine thresholds and performance evaluations for the metrics most critical to the RWP of AI/ML-based SaMD, including those that could be used to proactively respond to safety and/or usability concerns, and for eliciting feedback from end users. However, the Action Plan did not provide a timeline for such a pilot program.

*     *     *     *     *

AI/ML-based SaMD is a rapidly progressing field, and FDA anticipates this Action Plan will continue to evolve. In announcing the Action Plan, FDA said it “welcomes continued feedback in this area and looks forward to engaging with stakeholders on these efforts.” If you are interested in submitting a comment on the discussion paper or have questions about AI regulation more generally, please contact any of the authors of this alert or the Hogan Lovells attorney with whom you regularly work.

Authored by Jodi Scott, Kelliann Payne, John J. Smith, Alex Smith, and Megana V. Sankaran
