A new working group from one of the most significant international policy institutions in the world aims to bridge the gap between the artificial intelligence and privacy communities.

The main goal of the Organisation for Economic Co-operation and Development's Expert Group on AI, Data and Privacy is to show where the bridges already exist among the group's three pillars. The intergovernmental group is the latest offering from the OECD.AI initiative, which tracks AI incidents and runs an AI policy database — all with the aim of creating a collective repository. Now, members from the AI and privacy worlds will discuss the challenges they face in regulating those issues and work toward cooperative solutions.

"This group is really meant to harness the synergies and learnings from both communities, so that we can learn from each other and forge a way forward that is cohesive and united," said Denise Wong, the deputy commissioner of Singapore's Personal Data Protection Commission and a co-chair of the group. 

The working group promises to be diverse, pulling together people from industry and academia, as well as regulators. Within the group will be two working parties, one focused on data protection and another on AI governance. Members who spoke to the IAPP said they see a pressing need for a place where they can talk to each other and exchange ideas.

For instance, Winston Maxwell, director of law and digital studies at Télécom Paris, said the need for collaboration became obvious when he noticed his colleagues were writing papers on topics like AI, fairness and privacy. However, the authors were unaware of the data protection authority decisions and court cases surrounding those issues.

"The research community is coming up with their own very interesting ideas and approaches to solving AI problems," Maxwell said. "But what struck me is they really didn't know that a lot of those problems have been addressed a different way."

The working group will also help navigate areas where general agreement has been reached, but tricky details remain unsettled. 

"There is a broad agreement that transparency is an important prerequisite for trustworthy AI systems," Institute of Electrical and Electronics Engineers Senior Director for European Operations Clara Neppel said. "The question is how to fulfill this requirement without serious implications on privacy."

The group was formed as 71 different countries have AI policies or strategies in place, according to the OECD's database, but only one major binding international regulation exists. The EU AI Act passed the European Parliament 13 March, but it still has to go through a technical vote and the majority of the rules will not be in effect until three years after its entry into force. Like the EU General Data Protection Regulation before it, the AI Act has the potential to set standards for AI policy as the technology continues to advance.

Israeli Privacy Protection Authority Head of Legal Reuven Eidelman said the GDPR is one prominent piece of legislation where privacy and AI have already been addressed together, and one the group could highlight. For example, AI stakeholders may look to GDPR Article 22's handling of automated decision-making technology, an area other regulators like the California Privacy Protection Agency are working to build guardrails around.

While rules that balance privacy and AI concerns are important, the two subjects do not overlap in every instance. Boundaries need to be in place for work on standalone privacy and AI matters.

"Not all required AI regulation is privacy related, and not all problems that have to do with AI are privacy problems," Eidelman said.

One of the group's primary goals is to define those parameters. There is also a need, according to Maxwell, to identify common terminology between privacy and AI communities and create a working lexicon.

The group's goal is to create a trusted ecosystem for stakeholders to look to when trying to navigate existing laws while pursuing innovation, Wong said. It is the type of work the privacy community is already used to doing when it comes to AI.

"Part of the work of regulators and the community around them is to unpack some of these laws to see where the limits are," she added. "And within those limits, make sure that people can use the data they need in order to innovate."