Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

Artificial intelligence is no longer a futuristic promise in health care. It has already arrived.

In 2018, the U.S. Food and Drug Administration authorized the first autonomous AI diagnostic system for diabetic retinopathy, allowing an algorithm to evaluate retinal images without requiring a clinician to interpret the results.

That approval was more than a regulatory milestone; it signaled AI's entry into mainstream clinical practice, underscoring how algorithms are beginning to influence decisions about diagnosis, treatment and patient management.

The excitement surrounding AI in health care comes from its ability to make medicine faster, more precise and in some ways more personal. Algorithms trained on large datasets can detect patterns in radiology scans that elude even seasoned specialists, flag patient deterioration before it becomes obvious and suggest interventions that reduce costly hospitalizations.

In the administrative realm, AI-powered automation is starting to relieve clinicians of paperwork, scheduling tasks and billing complexities, freeing them to spend more time at the bedside, the time patients consistently say they value most.

Beyond efficiency, researchers see AI as a key driver of personalized medicine. Instead of treating all patients with the same standard protocols, machine learning can analyze genetic, lifestyle and clinical data to tailor interventions to the individual. For example, cancer therapies can be more precisely matched to tumor profiles, and risk prediction models can help identify who would benefit most from preventive measures.

Still, as researchers caution, the extraordinary potential of personalized medicine is accompanied by technical, regulatory and ethical challenges that demand careful governance.

The very qualities that make AI transformative also make it risky. Health care algorithms are only as good as the data they are trained on, and when those data underrepresent certain populations, the outputs may amplify inequities. A 2023 study highlighted how racial and ethnic bias in health care algorithms can skew resource allocation, diagnostic accuracy and treatment recommendations. The authors emphasized that without deliberate safeguards, AI may end up reinforcing the disparities it was meant to reduce.

Bias is not the only concern. Privacy is also at stake when vast quantities of sensitive data are used to fuel innovation. Patients need assurance that their most intimate information will not be exploited, misused or shared beyond what is necessary.

Existing privacy frameworks provide some protection but fall short of addressing AI's complexity. In the U.S., the Health Insurance Portability and Accountability Act Privacy Rule sets standards for how protected health information can be used and disclosed, but it was written long before machine learning became a factor in clinical decision-making. It does not fully account for secondary uses such as training algorithms, nor for the expectation that patients should understand and consent to AI-driven influences on their care.

In Europe, the EU General Data Protection Regulation provides a more comprehensive model, requiring data minimization, explicit legal bases for processing, and rights for individuals to access, correct and even erase their data. Still, the tension between protecting privacy and enabling research remains a persistent challenge.

Recognizing the gap, governments and regulators are beginning to act. The EU Artificial Intelligence Act introduces a risk-based framework that classifies medical AI as "high risk." That designation triggers heightened requirements for transparency, testing, monitoring and human oversight, creating a higher compliance bar for health care applications.

Meanwhile, in the U.S., the National Institute of Standards and Technology has introduced its AI Risk Management Framework. This framework does not impose legal mandates but offers a structured process for identifying risks, mitigating harms and promoting trustworthy AI across the life cycle. Together, these initiatives reflect a shift toward embedding governance into the design of technology rather than retrofitting rules after deployment.

The medical profession itself is not waiting on regulators. The American Medical Association recently issued a set of principles for the development and deployment of health care AI, insisting the technology must be transparent, accountable, equitable and respectful of patient privacy. Importantly, the AMA emphasizes that AI should augment clinical judgment, not replace it, reinforcing the principle that human decision-making remains central to care. This professional stance is vital because it frames AI not as a threat to clinicians but as a tool to enhance their expertise and strengthen patient relationships.

The road ahead will require active collaboration across stakeholders. Health care organizations must negotiate clear contracts with vendors, spelling out rights over data, model transparency and accountability for outcomes. Compliance officers and privacy counsel need to be involved early, not brought in at the end, so that governance becomes part of the innovation process rather than an obstacle to it. Clinicians need training to understand when to rely on an AI tool, when to challenge it, and how to explain its role to patients in ways that inspire confidence. And patients, above all, must feel that their dignity, autonomy and privacy are respected.

Trust in health care AI will not be built through marketing campaigns or regulatory fine print. It will be earned in the exam room, the hospital ward and the home visit, moment by moment, as patients experience whether technology is truly serving their needs. If AI enables earlier diagnoses, more personalized treatments and more meaningful conversations with clinicians, trust will grow. If it creates confusion, opacity or suspicion about how data are being used, trust will diminish.

The question is not whether AI belongs in health care. It does, and it will. The real question is how it will be shaped: whether we treat privacy, equity and accountability as afterthoughts or as foundations. The answer will determine whether AI strengthens the bond between patient and provider or undermines it.

With thoughtful governance, rigorous oversight and a shared commitment to transparency, AI can fulfill its promise not only to revolutionize health care, but to do so in a way that honors the values on which medicine is built. That is how innovation and trust can advance together.

Liyue Sigle, AIGP, CIPP/E, CIPP/US, is a privacy and AI counsel. Dr. Monia Reding is a vitreoretinal surgery fellow at Oregon Health & Science University.