President Biden’s groundbreaking Executive Order on artificial intelligence carries significant implications for the health and life sciences industry. The Order tasks federal agencies, including those responsible for health industry oversight, with developing responsible AI guidelines and meaningful measures to regulate and assess its use. Although the Order focuses largely on the government’s role in the use of AI, its impact will be felt throughout the industry given the standard setting it will spark.
For the health and life sciences sector, key developments include:
The HHS AI Task Force and Standard Setting
As outlined in our previous publication, the Order directs various federal agencies to set new standards for AI safety and security, safeguard Americans’ privacy, advance equity and civil rights, support consumers and workers, and promote innovation and competition. Recommendations put forth by these agencies will carry weight, influencing the development of industry standards and, potentially, regulations in this rapidly developing area. A number of these agencies regulate the health and life sciences sector.
Notably, among the agencies tapped for standard setting is the U.S. Department of Health and Human Services (HHS). Under the Order, HHS will establish an AI Task Force that will develop a strategic plan for the responsible use of AI and AI-enabled technologies in health care, addressing, at a minimum, a core set of priorities for the sector.
Organizations in the health and life sciences industry may monitor the task force’s activities and develop their own processes to evaluate issues related to the task force’s priorities. For example, developers or users of AI models that process personally identifiable health information can establish processes for evaluating the privacy impact of those models and the safeguards used to secure such data. That work can help position an organization to adapt more readily to standards emerging from the task force.
Acute Impact for Government-funded Research
Organizations that participate in government-funded projects will be among the first to feel the Order’s impact. For example, the Order directs HHS to prioritize grantmaking and awards for the responsible development and use of AI. Some of the same issues central to the AI Task Force’s strategic plan, such as appropriate implementation of safety, privacy, and security standards, likely will factor into funding decisions. And as agencies implement the Order’s directives, they may place a strong emphasis on protecting sensitive research data, including through stringent protocols for data encryption, access controls, and secure storage to safeguard against unauthorized access or data breaches. Government-funded entities will be incentivized to thoroughly screen AI models and to implement practices that protect sensitive data processed by those models. To secure and maintain federal funding, organizations also will need to be prepared to integrate agency standards and guidelines into their research and development processes.
Considerations for AI Developers
AI developers should be aware that the Order may foreshadow government action, and that such action need not await the new standards the Order directs. In fact, some agency efforts that preceded the Order are already having an impact. HHS, including through the Office of the National Coordinator for Health Information Technology, has proposed rules regulating the use of algorithms in clinical decision-making and predictive decision support interventions within health care.
The Order underscores the federal government’s commitment to guarding against negative consequences from the use of AI in health care, including through the use of existing tools. For example, the Order signals that regulators will be monitoring new health and safety risks introduced by AI, and it makes clear that the federal government will enforce consumer protection laws and principles across industries, including health care, and will not tolerate discrimination and bias in health care. These signals align with other developments in the evolving data and consumer protection environment in the U.S., such as the emergence of state consumer privacy laws that regulate certain forms of automated data processing. For developers, they also serve as a reminder of several key considerations for building and deploying AI tools in health care.
Next Steps
It’s important to identify the regulatory agencies with oversight of your business and to watch AI developments within those agencies carefully. Monitoring the standards set, and recommendations made, by these agencies can help organizations more swiftly align operations, technologies, and processes with emerging industry practices and regulations, secure government funding, and remain a trusted actor in the evolving AI health care landscape.