The impact of artificial intelligence on processes and services in the health care sector carries both promise and risk. Ontario's Office of the Information and Privacy Commissioner is trying to get ahead of developments in this delicate landscape with an eye toward balancing safety and innovation.
In recent weeks, the IPC offered new guidance materials to help thread the needle.
First came the new Principles for the Responsible Use of AI, developed in coordination with the Ontario Human Rights Commission. The IPC described the principles as tools to "develop, deploy, and use AI in ways that maintain public trust by respecting privacy and human rights."
Additional guidance from the IPC covers AI notetakers, or scribes, in health care settings and features a checklist for procurement professionals, developers and users in the health sector, outlining key considerations to weigh throughout the AI life cycle when evaluating potential AI solutions.
The new resources helped set the stage for a broader discussion on AI in health care during a 28 Jan. workshop commemorating Data Privacy Day.
Delivering opening remarks at the workshop, IPC Commissioner Patricia Kosseim referenced a recent survey from the Canadian Medical Association and the Canadian Federation of Independent Business. Of the nearly 2,000 physicians surveyed, 90% reported a significant administrative burden from filling out paperwork, work that amounts to an aggregate of 20 million hours annually and detracts from their ability to care for patients.
Kosseim pointed to other responses that showed roughly half of physicians identifying AI as a potential solution for easing administrative tasks, while half of those surveyed also acknowledged "real privacy, security and legal risks" surrounding AI use in clinical settings. The survey also found that approximately one-third of physicians wanted help identifying and vetting various AI products.
The guidance on AI scribes "will help health professionals take a privacy-first approach focusing on core governance and accountability measures needed to protect personal health information and reduce the risk of bias and inadequacy," Kosseim said. "Together these two companion documents set out clear expectations and best practices to ensure compliance with Ontario's health privacy law, mitigate risks of harm and ultimately preserve trust."
Opportunities for integrating AI in health care
One theme from the day-long workshop focused on the potential benefits of integrating AI into certain aspects of health care services.
St. Michael's Hospital Clinician-Scientist Dr. Amol Verma said AI uses in the health care sector fall into four main categories: general AI, general clinical AI, clinical AI tools and AI embedded in medical devices.
Verma said general AI, such as generative AI models like ChatGPT, is increasingly being used by practitioners to query basic medical information in place of traditional search engines like Google Search. General clinical AI, by contrast, may embed a specific health care system's information and data into an AI model to create a health-specific chatbot.
"The innovation is there, but it's uneven (in its distribution)," Verma said. "So now, we as a health care system have to look at that technology and say, 'We’re getting 10% of the people that use this are benefitting substantially, and that's meaningful.' How much are we willing to pay for that technology, and what are we substituting in our healthcare system to pay for that technology? Unless we have robust standards of rigorous evidence, we can’t make those decisions."
University of Ottawa School of Epidemiology and Public Health Professor and Canada Research Chair in Medical AI Khaled El Emam said that to realize the greatest benefits of AI in medicine, from both a delivery-of-care and an innovation perspective, Ontario and Canada as a whole need to develop a "playbook" for reforming regulations to enable greater access to medical data for researchers and for companies developing cutting-edge AI solutions.
Part of this playbook would involve shortening the timeframe for medical testing that includes an AI component, so answers on a tool's efficacy arrive sooner.
"The technology moves fast," El Emam said. "If the gold standard is to perform controlled trials and (randomized controlled trials) to evaluate interventions, these take a long time to do. If you're going to spend a couple years evaluating a technology in the clinic, two years from now, who cares? Everything else has changed and something better is available."
Establishing relevant frameworks for enabling AI integration
To ensure AI does not impede general patient rights and the right to privacy, panelists agreed Ontario's health care sector must explore all its framework options.
University of Ottawa Canada Research Chair in Information Law and Policy Teresa Scassa said key considerations for crafting policies around AI use in health care include data provenance, the varying degrees of consent attached to that data, and standards for acquiring AI solutions that comply not only with Canada's privacy laws but also with Ontario-specific rules.
"There is a proliferation of vendors that are trying to attract new customers and holding out promises that their tools were compliant with different privacy laws, and that can get complicated because the doctors or health care custodians in Ontario are subject to very specific privacy laws and those might not be the same ones (they) are being certified as being compatible with," Scassa said. "The provenance of data that's used to train AI is an interesting and thorny question because it can come from a variety of sources and consent can be obtained in a variety of ways. There's data used without consent, and there may also be data that is used with consent but the consent was obtained in ways that aren't genuine."
In terms of disclosing AI use in clinical health settings, IPC Senior Health Policy Advisor Nicole Minutti said data custodians must include the purpose for using AI; what data is shared with third parties and the reason for doing so; AI risks, such as bias; and the safeguards the custodian has in place to protect personal health information. She referenced a survey conducted by the Office of the Privacy Commissioner of Canada last year that found 88% of Canadians are concerned about their personal information being shared and used to train AI models, with 42% "extremely concerned."
"When we see this level of concern in the general public, it's inevitable that at some point data custodians are going to be asked about their use of AI systems," Minutti said. "They should be prepared to answer those questions."
Queen's University Dean of Law Colleen Flood argued AI used in health care can pose both clinical and privacy risks.
She said clinicians should not be faced with explainability requirements obliging them to tell patients how a given large language model used by the health care institution functions. Rather, they should be required to explain the material risks the model may pose to patients, such as automation bias or data leaks. Privacy risks, she said, stem from AI being used to re-identify deidentified data.
Another consideration for practitioners is ensuring their employers understand the terms-of-use contracts they are signing with AI vendors. Flood said some contracts are written so that all clinical and privacy liability falls on the health care provider and/or their institution.
"The desire for vendors will be to download all of that liability, privacy liability onto the clinician, so those contracts need to be carefully reviewed and considered," she added. "We need Big Bang (privacy) reform here: We over-assume the law does some things, it doesn’t do other things. It’s not working for what we need right now and we need to fix this."
In an interview with the IAPP following the workshop, Kosseim said the insights gleaned from the event will help inform the agency's approach to monitoring AI integration within the health sector. She said developers must view the need to uphold patients' privacy as "not in conflict with innovation."
"As a regulator, we need to support iterative thinking so that we can help inform the risks being taken engage all of those interested parties to participate in that process," Kosseim said. "The theme coming out of today is the need for trust across the system: Trust in providers, patients' trust in health care institutions, and how important it is to continue to build that trust so when tools like AI scribes are introduced they are well governed and patients don't lost that trust that is so fundamental to our health care system."
Alex LaCasse is a staff writer for the IAPP.