The opportunity for privacy professionals to apply their skills to help organizations identify and manage the risks posed by artificial intelligence tools was a major theme at the recent IAPP Global Privacy Summit 2023, which we attended in Washington, D.C.

The particular challenges posed by generative AI tools, such as ChatGPT and DALL-E, weighed heavily on the minds of many panelists and keynote speakers, including author and generative AI expert Nina Schick and U.S. Federal Trade Commission Commissioner Alvaro Bedoya. In his keynote, Bedoya gave his personal opinions on these tools, including:

  • Their outputs are a mimicry of human thought and creativity, rather than actual intelligence and creativity, yet are still (occasionally) capable of inspiring wonder.
  • Large language models are capable of unpredictable performance, are often inexplicable, and manage to scare many of their developers.
  • AI is already regulated under a variety of existing U.S. laws, such as consumer protection law (unfair and deceptive trade practices), civil rights law, tort and product liability laws, and common law causes of action. The complexity or inexplicability of such decision-making systems is not regarded as a valid defense under the law.

He concluded: "automated systems new and old are routinely used today to decide who to parole, who to fire, who to hire, who deserves housing, who deserves a loan, who to treat in a hospital – and who to send home. These are the decisions that concern me the most. And I think we should focus on them."

Those who attended the 2019 IAPP ANZ Summit may remember the panel session on “AI – what’s the drama? Applying accountability to new tech.” Privcore hosted the panel with former Australian Human Rights Commissioner Ed Santow and representatives from Microsoft and Macquarie University. Panelists discussed the opportunities for privacy professionals to apply their expertise in conducting privacy impact assessments to assess and mitigate AI risks. At the 2023 Global Privacy Summit, several panels continued these discussions of AI in the context of privacy and anti-discrimination. Panelists examined the challenges of training and deploying a single AI system across national borders where different biases are prohibited under different anti-discrimination laws. They also raised the need to discuss with regulators the ecological cost of training and deploying the multiple AI systems necessary to comply with those differing national anti-discrimination laws.

Asia Pacific privacy experts should be prepared for the opportunities and challenges that the deployment of AIs (whether decision-making systems or generative tools) poses in the region. Many such systems have been trained predominantly on foreign datasets. Privacy professionals are well placed to identify and mitigate the risks these tools may create when transposed into the region.

For example, much work has already been done in Australia to encourage AI development with appropriate safeguards in place. The Information and Privacy Commission New South Wales released an overview of the AI regulatory landscape in Australia and globally in late 2022. The IPC outlined the privacy risks AIs can introduce and how these can be addressed. More recently, the Australian government established a National AI Centre to further develop Australia’s AI and digital ecosystem, including the responsible use of AI.

The Australian Department of Industry, Science and Resources has also taken oversight of Australia’s AI ethics framework, first developed in 2019. That framework guides businesses, governments and other organizations in responsibly designing, developing and using AI. With the change in government, the department is also now responsible for the consultation on “Positioning Australia as a leader in digital economy regulation (automated decision making and AI regulation).” The Department of the Prime Minister and Cabinet initially released the issues paper in 2022. A discussion paper with reform proposals is expected to be released for public consultation.

Top photo: IAPP Global Privacy Summit 2023 Keynote Speaker Nina Schick