2024-2025 Global AI Trends Guide
The Biden administration announced Thursday that 28 health care providers and payers have signed the White House’s voluntary commitment aimed at ensuring the safe development of artificial intelligence (AI), adding to the prior commitments of 15 tech firms to develop AI models responsibly. This latest announcement underscores how quickly regulatory paradigms for AI are evolving, and the need for health care systems and others in the life sciences sector to consider steps to incorporate the White House’s Executive Order on AI, among other existing artificial intelligence guidance.
In a fact sheet released alongside the announcement of 28 new signees to the voluntary commitment, the White House outlines how the companies signing on have agreed to:
Develop AI solutions responsibly, including by optimizing health care delivery and payment through advancing health equity, expanding access, making health care more affordable, improving outcomes through more coordinated care, improving the patient experience, and reducing clinician burnout.
Work with their peers and partners to ensure outcomes are aligned with fair, appropriate, valid, effective, and safe (FAVES) AI principles, as established and referenced in HHS’ Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) final rule, which was just finalized on Wednesday.
Deploy trust mechanisms that inform users if content is largely AI-generated and not reviewed or edited by a human.
Adhere to a risk management framework that includes comprehensive tracking of applications powered by frontier models and an accounting of potential harms, along with steps to mitigate them.
This announcement comes amid a wave of heightened regulatory concern over the use of artificial intelligence, including the Biden administration’s Oct. 30 issuance of Executive Order 14110 on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which outlined dozens of actions, including many for which the U.S. Department of Health and Human Services (HHS) is responsible. Notably, the EO requires developers of AI systems that pose risks to U.S. national security, the economy, public health, or safety to share the results of safety tests with the government before releasing those systems to the public.
In addition, the Financial Stability Oversight Council, an interagency group led by Treasury Secretary Janet Yellen, identified AI as a vulnerability in the financial system for the first time this year as part of its annual report. “AI can introduce certain risks, including safety and soundness risks like cyber and model risks,” reads the report analysis published Thursday.
The administration’s latest AI fact sheet also provides a litany of examples of recent HHS AI-related regulatory actions. “The administration is pulling every lever it has to advance responsible AI in health-related fields,” the White House said in announcing the latest voluntary commitments, adding: “Without appropriate testing, risk mitigations and human oversight, AI-enabled tools used for clinical decisions can make errors that are costly at best - and dangerous at worst.”
If you have any questions about compliance with the White House’s latest Executive Order on AI or HHS’ HTI-1 final rule, or about your policies covering artificial intelligence systems more generally, please feel free to contact any of the authors of this alert or the Hogan Lovells attorney with whom you regularly work.
Authored by Thomas Beimers, Melissa Bianchi, and Ron Wisor