At our recent Health Care AI Law and Policy Summit, Hogan Lovells partner Melissa Bianchi moderated a panel discussion on the state of the health care AI industry. The panelists, representatives of the American Medical Association and the Consumer Technology Association, discussed how “artificial intelligence” is being defined today, some of the standards used to assess it, and the unique data challenges that arise in the course of AI development.
Teeing up the Summit’s discussion of the state of regulatory affairs for health care AI, Melissa Bianchi, partner in the Hogan Lovells health practice and leader of the firm’s digital health initiative, discussed the potential for artificial intelligence to revolutionize the health care industry, including its ability to help reduce waste, streamline payments, and better diagnose patients. Yet, she noted, questions remain over how the industry can best harness that potential and what barriers stand in the way of realizing it fully.
The panelists first discussed how AI should be defined. Kathleen Blake, MD, MPH, senior advisor to the American Medical Association (AMA), began by emphasizing that AI product sponsors have an obligation to show that the population in which the evidence was developed represents the communities across the globe where the technology will be deployed, and that population-specific evidence will be needed to safely extend the use of AI to new communities. Dr. Blake stressed that AI needs to show that it will enhance equity for all and that it will bear on outcomes that are meaningful to patients.
Following up on Dr. Blake’s remarks, Kerri Haresign, director of Technology & Standards at the Consumer Technology Association (CTA), mentioned her organization’s two published AI standards, which address how AI is defined and the importance of trustworthiness in AI. Ms. Haresign cautioned that “we get stuck when we try to define AI broadly,” differentiating between “assisted intelligence” and “autonomous intelligence,” with the latter category not requiring human intervention. Offering the AMA’s definition, Dr. Blake said the association views AI as “augmented” intelligence, recommending that the focus be on the incremental gain from new technology.
Turning to the role of standards in AI, Ms. Bianchi described how part of the goal of standards is to “build towards something that can be regulated and foster more efficient, faster approvals.” Dr. Blake urged industry stakeholders to include patients early on in designing studies.
Moving the discussion to the challenges of obtaining the large, high-quality data sets needed to build AI, Ms. Bianchi pointed out that HIPAA was drafted long before many of today’s innovations and now creates challenges for accessing data sets. Echoing this concern, Dr. Blake described the “almost chimeric” competing goals of HIPAA, which aims to promote broader patient access to data while also ensuring data privacy. Dr. Blake advocated for more automatic capture of data, patient-entered data, and increased access for patients to examine their data and correct errors.
At CTA, Ms. Haresign said, industry has recognized the importance of proper treatment of health data and has published industry practices to address it. Noting that health care providers face challenges in trusting the data used in AI algorithms, Ms. Bianchi asked the panel what issues they have seen arise with accountability and bias. Dr. Blake responded that “trust is comparable with explainability,” explaining how clear labeling can help resolve this dilemma. Ms. Haresign noted that the level of trust required for an AI product corresponds to the level of risk associated with that product, whether a drug or a device.
You can view video recordings and summaries of the other panels from the Health Care AI Law and Policy Summit online.