Recent actions by state and federal lawmakers are trending toward increased regulation and oversight of the use of AI in health care. From patient communications to medical necessity determinations and more, legislators and regulators are focused on, and willing to act on, the use of AI in the health care industry.
Among the flurry of AI-related laws signed by California Governor Newsom on September 28, 2024, several will impact health care companies. These laws also share similarities with recent regulatory changes at the federal level.
AB 3030 requires health facilities, clinics, physician’s offices, and offices of a group practice to present prominent disclosures about their use of generative AI to generate written or verbal patient communications pertaining to patient clinical information, effective January 1, 2025. Depending on the specific form of the communication, the “use of generative AI” disclaimer must be provided either at the beginning of the communication or throughout it. AB 3030 also requires that subject communications provide clear instructions for how a patient can contact a human health care provider, employee, or other appropriate person. However, AB 3030 broadly exempts communications read and reviewed by a health care provider. Thus, where generative AI is used to create a patient communication but a provider then reviews the communication, the disclaimer is not required.
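To illustrate how an organization might operationalize this disclaimer-and-exemption logic, the following is a minimal sketch in Python. The statute does not prescribe any implementation; the field names, function, and disclaimer text below are hypothetical assumptions for illustration only.

```python
from dataclasses import dataclass

# Hypothetical sketch of AB 3030's disclaimer logic; field names and
# disclaimer text are illustrative assumptions, not statutory language.

@dataclass
class PatientMessage:
    body: str
    generated_with_genai: bool       # was generative AI used to draft this?
    provider_reviewed: bool          # did a human provider read and review it?
    pertains_to_clinical_info: bool  # does it concern patient clinical information?

DISCLAIMER = (
    "This message was generated by artificial intelligence. "
    "To reach a human health care provider, contact [appropriate contact]."
)

def prepare_for_delivery(msg: PatientMessage) -> str:
    """Prepend a generative-AI disclaimer unless the provider-review exemption applies."""
    needs_disclaimer = (
        msg.generated_with_genai
        and msg.pertains_to_clinical_info
        and not msg.provider_reviewed  # provider review exempts the message
    )
    if needs_disclaimer:
        # AB 3030 expects a prominent disclosure; its placement (at the
        # beginning or throughout) varies by the form of the communication.
        return f"{DISCLAIMER}\n\n{msg.body}"
    return msg.body
```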
SB 1120 amends sections of California’s Health and Safety Code and Insurance Code that provide for the licensure and regulation of health care service plans by the Department of Managed Health Care (“DMHC”) and disability insurers by the Department of Insurance (“DOI”), effective January 1, 2025. Among other changes, these amendments establish specific requirements for how health care service plans and disability insurers in the state may use artificial intelligence, algorithms, or other software tools for utilization review or management.
The law defines “AI” as “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments,” but does not define “algorithm” or “other software tool.” This definition aligns with the uniform definition of AI now adopted across California law under AB 2885, which was also signed by the Governor in September.
SB 1120’s changes apply broadly to health care service plans, disability insurers, and any vendor contracted with such entities for the provision of utilization review or management functions. The amendments require that any utilization review or management decision based on medical necessity be made only by a licensed physician or other licensed, qualified health care professional competent to evaluate the specific clinical issues involved in the requested health care services, who must review and consider the requesting provider’s recommendation, the enrollee’s medical or other clinical history, and the enrollee’s individual clinical circumstances.
These changes also impose specific requirements on a health care service plan or disability insurer to ensure that any AI, algorithm, or other software tool used in utilization review or management:
bases its determination on the enrollee’s medical or other clinical history, individual clinical circumstances presented by the requesting provider, and other relevant clinical information contained in the enrollee’s medical or other clinical record;
is fairly and equitably applied;
is open to inspection for audit or compliance reviews by DMHC, DOI, and by the Department of Health Care Services pursuant to applicable state and federal law;
complies with applicable state and federal law;
is periodically reviewed (including performance, use, and outcomes) and revised to maximize accuracy and reliability; and
does not:
use patient data beyond its intended and stated purpose;
base its determination solely on a group dataset;
supplant health care provider decision making;
discriminate directly or indirectly against enrollees in violation of state or federal law; or
directly or indirectly cause harm to the enrollee.
These restrictions appear to apply to prospective, retrospective, and concurrent reviews of requests for covered health care services. Additionally, entities subject to these requirements must ensure that disclosures about the use and oversight of such tools are included in written policies and procedures for utilization review or management activities.
On its face, the amendments appear to permit AI, an algorithm, or another software tool to be used in medical necessity determinations so long as a physician or other competent health care provider reviews and makes the final medical necessity determination. However, the amendments clearly seek to restrict how such tools are used, and health care service plans and disability insurers subject to the law may have questions about how specifically to implement these broad, and arguably vague, requirements. It will be important to monitor DMHC and DOI as they implement this law to see whether they provide further guidance.
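For plans and vendors mapping these requirements onto an internal compliance program, one way to track them is a structured checklist keyed to each statutory condition. The sketch below is a hypothetical illustration only; SB 1120 prescribes no format, and the field names are invented for readability.

```python
from dataclasses import dataclass, fields

# Hypothetical compliance checklist mirroring SB 1120's requirements for
# AI, algorithm, or software tools used in utilization review or management.
# Field names are illustrative assumptions, not statutory terms.

@dataclass
class UtilizationReviewToolAudit:
    uses_individual_clinical_record: bool    # based on enrollee's own history
    applied_fairly_and_equitably: bool
    open_to_regulator_inspection: bool       # DMHC, DOI, DHCS audits
    complies_with_state_federal_law: bool
    periodically_reviewed_and_revised: bool  # performance, use, and outcomes
    limits_data_to_stated_purpose: bool
    avoids_sole_reliance_on_group_dataset: bool
    preserves_provider_decision_making: bool
    nondiscriminatory: bool
    avoids_enrollee_harm: bool
    final_determination_by_licensed_professional: bool

def open_findings(audit: UtilizationReviewToolAudit) -> list[str]:
    """Return the names of any requirements not yet satisfied."""
    return [f.name for f in fields(audit) if not getattr(audit, f.name)]
```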
Specific requirements regarding when and how AI can be used in medical necessity determinations were also imposed at the federal level, as explicitly noted in the preamble and FAQs for the Final Rule issued by the Centers for Medicare & Medicaid Services (“CMS”). CMS’s requirements, already effective for coverage beginning on January 1, 2024, require Medicare Advantage Organizations (“MAOs”) to make medical necessity determinations based on the circumstances of the specific individual, as opposed to relying on an algorithm or software that does not account for individual circumstances. In its FAQs, CMS clarified that AI, algorithms, and software may be used to assist plans in making coverage determinations, but they must comply with the applicable rules for how coverage determinations are made, including basing the decision on the individual patient’s circumstances; AI or an algorithm alone cannot be used as the basis to deny coverage.
AB 2013 requires developers of generative AI systems or services that are made publicly available to Californians after January 1, 2022, to post information on their websites about the data used to train those systems or services by January 1, 2026. The law applies not only to the original developer of an AI system or service, but also to any person or entity meeting the definition of “developer” that “substantially modifies” a generative AI system or service, including through new versions, releases, re-training, or fine-tuning that materially changes functionality or performance. The law includes a few narrow carve-outs. For example, “affiliates” (separately defined as any entity that, directly or indirectly, through one or more intermediaries, controls, is controlled by, or is under common control with, another entity) and “a hospital’s medical staff member” are explicitly excluded from the definition of developer. Generative AI systems and services whose sole purpose is to help ensure security and integrity or to operate aircraft in the national airspace, or that are made available only to a federal entity for national security, military, or defense purposes, are entirely out of scope. However, there is no broader carve-out for uses in public health, health care settings, or research, or for use by employees of other health-related institutions.
Unlike other recent bills that similarly applied across industries (notably SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which was vetoed, and SB 942, the California AI Transparency Act, which was passed), AB 2013 is not targeted only at large, particularly powerful AI models. This broad reach could have implications for companies in the health care sector if they create their own generative AI systems or services, or if they engage in material re-training or fine-tuning of an existing generative AI model to create their own version that is offered publicly to Californians. The law requires specific disclosures about the data used to train the AI, such as the types, sources, purpose, and timing of the data used to train these tools, reflecting increased attention to, and requirements around, transparency of training data. It will also require companies to specifically state whether the training datasets include any data protected by intellectual property rights; whether the data were entirely in the public domain, purchased, or licensed; and whether the datasets include personal information as defined by the California Consumer Privacy Act.
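As a rough illustration of the categories of information a developer might compile for such a disclosure, the dictionary below sketches one possible structure. AB 2013 does not prescribe a schema, so every key, value, and system name here is a hypothetical assumption.

```python
# Hypothetical outline of an AB 2013 training-data disclosure; the statute
# requires website disclosure of this kind of information but prescribes no
# format, so the keys, values, and system name below are illustrative only.

training_data_disclosure = {
    "system_name": "ExampleHealthGPT",          # hypothetical system
    "data_types": ["clinical notes (de-identified)", "medical literature"],
    "data_sources": ["licensed publisher corpora", "public web text"],
    "purpose_of_use": "pre-training and fine-tuning for patient messaging",
    "collection_period": "2019-2023",           # timing of data used
    "includes_ip_protected_data": True,         # e.g., licensed content
    "provenance": "mixture of licensed and public-domain data",
    "includes_ccpa_personal_information": False,
}
```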
The Department of Health and Human Services (“HHS”) Office of the National Coordinator for Health Information Technology (“ONC”) similarly requires transparency about the training data used in predictive decision support interventions (“Predictive DSIs”) under its Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (“HTI-1”) Final Rule. Among other requirements, which became effective on March 11, 2024, HTI-1 requires Health IT Modules certified to the DSI criterion to enable users to access information about the design, development, training, and evaluation of Predictive DSIs, including descriptions of training data and information on whether the Predictive DSI was tested and evaluated for fairness. It also requires developers of certified health IT to apply risk management practices to all Predictive DSIs supplied as part of their Health IT Modules and to make summary information about these practices publicly available.
Increased Attention on Health AI
Legislators and regulators are paying increased attention to the use of AI in health care. Not only are they holding hearings and publishing new rules, but they are also empowering state and federal agencies to promote compliance and enforce the responsible use of health AI.
Health care providers, insurers, and vendors using health AI will need to:
identify and assess their uses of health AI, confirming they understand where and how the health AI was developed, is trained, and has been deployed;
evaluate their existing external notices and internal documentation to confirm they address and comply with new requirements;
conduct risk assessments and ongoing auditing and monitoring of their uses of health AI; and
closely monitor developments in this space to continuously assess how their activities comply with the evolving patchwork of laws, regulations, and guidance regarding the development and use of AI at the state and federal levels.
Authored by Marcy Wilder, Melissa Bianchi, Alyssa Golay, and Jessica Hanna.