The top privacy advisor for the U.S. National Institute of Standards and Technology said nontechnical challenges have at times hindered federal agencies' abilities to include privacy-enhancing technologies in their operations.
The White House executive order on artificial intelligence, which marks its one-year anniversary 30 Oct., charged NIST with identifying agencies' ongoing efforts and potential opportunities to add PETs to their protective measures. The order emphasized privacy as a crucial element guiding the U.S. approach to AI, characterizing it as a key issue in ensuring the technology can be developed and used safely.
NIST Senior Privacy Policy Advisor Naomi Lefkovitz, AIGP, said the agency found that work was at times hampered by knowledge and buy-in gaps among those who were either unfamiliar with PETs or skeptical of their use.
"There's also these issues touching on trust and risk calculations, where we don't know enough without standards to say, 'Hey, is this PET going to perform the way it's intended to,' and not treating a PET like a silver bullet," she said. "And being clear about that so people can make a better risk decision."
The comments, made during the IAPP Privacy. Security. Risk. 2024 conference in Los Angeles, California, underline the challenges around curbing privacy issues raised by AI models trained on vast swaths of data.
The AI executive order also tasked the NIST with developing guidelines on how to evaluate the validity of differential privacy guarantees within one year. Lefkovitz said she expects the agency to hit that deadline.
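The "guarantee" at issue here is a mathematical one: a differentially private system promises that its output reveals only a bounded amount about any single individual, with the bound set by a privacy parameter, epsilon. As an illustration only — this is a minimal textbook sketch of the Laplace mechanism for a count query, not NIST's evaluation methodology — the guarantee comes from calibrating random noise to the query's sensitivity:

```python
import math
import random

def laplace_noise(scale):
    # Sample from a Laplace(0, scale) distribution via inverse CDF.
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def dp_count(records, predicate, epsilon):
    """Differentially private count of records matching predicate.

    A count query has sensitivity 1 (adding or removing one person
    changes the answer by at most 1), so Laplace noise with scale
    1/epsilon yields an epsilon-differential-privacy guarantee.
    Smaller epsilon = stronger privacy but noisier answers.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

Evaluating the "validity" of such a guarantee means checking, among other things, that the claimed sensitivity and noise calibration actually hold for the system as deployed — which is part of why standards work in this area is nontrivial.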
"What we tried to do is make it more accessible, even to a nontechnical audience," she said.
PETs have been gaining traction among regulators and standard setters for their potential to prevent data reconstruction attacks against AI models. But those agencies also say more use cases need to be discovered to understand the limitations of the technology and develop standards around their use with AI.
Institutions around the world have attempted to evaluate PETs already. Singapore's Personal Data Protection Commission said in proposed guidelines for synthetic data generation that the technology "addresses dataset-related challenges for AI model training, such as insufficient and biased data, through enabling the augmentation and increased diversity of training datasets," but should be subject to risk assessments before adoption. The U.K. Information Commissioner's Office included AI in a use case study around synthetic data. And the Organisation for Economic Co-operation and Development released a working paper studying the efficacy of different types of PETs.
But privacy regulators are unsure whether PETs serve as a way for AI to meet legal requirements or whether their use in building systems would be enough to exempt AI developers from privacy law obligations, said Vance Lockton, CIPP/C, CIPM, the senior technology policy advisor for the Office of the Privacy Commissioner of Canada. He said the hype around AI creates pressure for regulators to find ways to make AI fit with privacy law while not stifling its development.
Not being able to give definitive guidance around PETs while seeing them as a viable option puts regulators in a complicated spot, Lockton said. Pushback from developers who say using PETs would either be too expensive or would create lesser quality data complicates the issue further, he said.
"Our commissioner loves this idea that innovation is going to solve the privacy problems created by innovation," he said. "So PETs are good; I think PETs are going to eventually solve these problems. I think there's still uncertainty or a lack of comfort in how that can be done."
But regulators can build that certainty by providing test beds for PETs to prove their reliability, Lockton said, pointing to Singapore's PET sandbox and Japan's privacy technology-certification program as examples.
"That's something I would always encourage organizations to engage in, to the extent that regulators are making those available," he said. "Try to pull it apart and make us part of your red team. ... I think there's going to be this continued drive toward adoption of PETs, but there's going to have to be a bit of that first step to say, 'We need to be very sure about what these things are doing and what their limitations are.'"
Caitlin Andrews is a staff writer covering AI for the IAPP.