Across labs and clinics worldwide, scientists are harnessing artificial intelligence to improve diagnostics, personalize treatment pathways and reimagine the modes of care delivery we once thought fixed. From AI-enhanced imaging that catches early-stage tumors to predictive analytics that anticipate disease progression, the scientific community is delivering unprecedented innovations. In the long term, novel AI-enabled therapies and groundbreaking insights at the molecular level seem within reach as well.

As scientists work wonders with AI, they also enjoy widespread credibility in society. This is remarkable, given historically low public confidence in many institutions. Consider that 76% of Americans say they have a fair amount or a great deal of confidence in scientists to act in the public interest, a higher rating than for elected officials, journalists or business leaders. Separate polling has found that health care providers, researchers and the Centers for Disease Control and Prevention rank among the most trusted sources of public health information.

So, when we talk about AI governance in domains as sensitive and high-stakes as health care and life sciences, there is a tremendous opportunity to advance the use of AI with professionals who are already trusted.

How to maintain trust in health AI

While general trust in health care professionals is high, trust in AI is relatively low. Whether because health care is already a heavily regulated industry, or because health care workers recognize that regulation supports trust, rules and best practices for the use of AI in health care are burgeoning.

Customers have shown openness to AI-powered health care and advice, but they demand human oversight, transparency and a good user experience. Biased or poorly implemented AI systems have already resulted in lawsuits, fines and reputational harm for major health providers.

As a result, policymakers in many jurisdictions are providing ground rules for the governance of AI in health settings. The most recognized AI legislation, the EU AI Act, classifies many health AI use cases as high-risk, while promoting harmonization with other EU regulations, including the Medical Device Regulation and the In Vitro Diagnostic Regulation. Though the details of that harmonization are still being worked out, organizations have relatively clear guidance on what regulators and the public will expect.

Japan has also adopted a comprehensive policy framework for AI use. Its AI Promotion Act outlines an agile approach to AI regulation for health care uses, with leadership from the Ministry of Health, Labour and Welfare and the Pharmaceuticals and Medical Devices Agency.

In the U.S., though there is no general privacy or AI law at the federal level, the Food and Drug Administration has been leading the charge on AI governance in health care and life sciences for several years. In July, it announced two streamlined AI councils: one focused on how the agency will use AI internally — including to review an increasing volume of new use cases and modes of care delivery — and another focused on providing AI governance oversight externally. These councils will need to enable AI innovation while ensuring rigorous oversight of health AI use cases.

Principles for good governance in AI and health care

According to emerging best practices from policymakers and regulators, AI governance practitioners could consider the following principles to ensure good governance for AI systems in health care:

  • Insist on high standards: Recognize that health AI, and especially its high-stakes use cases, requires strict oversight.
  • Ground rules in evidence: Incorporate the scientific community's expertise into policy discussions and governance frameworks.
  • Build on proven principles: Leverage existing ethical frameworks in health care and the life sciences, as well as globally agreed-upon AI principles from the Organisation for Economic Co-operation and Development and the G7, to inform AI governance efforts.
  • Foster transparency without throttling innovation: Ensure models and data are explainable and auditable, while supporting research and cross-border collaboration.
  • Consider equity and usability: AI should serve all communities fairly, not replicate existing disparities, and should remain usable for clinicians and patients alike.

Next steps for AI governance in health care

AI in health care and life sciences holds tremendous promise. Sectoral regulators and health researchers who have examined how existing rules apply to AI systems in the health care context have often reached the same conclusion: AI regulations are broad in nature, and specific guidance and standards will be required to bring the vast number of potential health AI uses to market while maintaining customer trust.

Many health organizations, including the World Health Organization, have begun comprehensive work to understand the implications of AI in the health sector. This includes a framework for scoping the use of AI in health, as well as a series of strategies and guidance documents to support those using and working with AI in the sector.

Ashley Casovan is the managing director of the IAPP's AI Governance Center.

Val Shankar is the chief AI and privacy officer for Enzai.

Editor’s note: As a professional association supporting AI governance professionals working in health, the IAPP has started to explore how we can better support these individuals. This year, at our AI Governance Global conferences, we have included instructional workshops for AI governance in health care. The next instructional workshop will be held in Boston, 17 Sept. This is a topic we will continue to follow and report on.