A reported shift toward artificial intelligence automation in Meta's risk assessment program sparked questions about trickle-down impacts on the privacy and digital responsibility profession. According to Meta, the planned changes are not about replacing human reviews but about building out systems to handle the growing digital governance ecosystem.
Meta Vice President of Policy and Deputy Chief Privacy Officer Rob Sherman joined IAPP Vice President and Chief Knowledge Officer Caitlin Fennessy, CIPP/US, for a LinkedIn Live to discuss what Sherman described as "unfortunate mischaracterizations" made by NPR regarding planned risk assessment changes.
NPR said it obtained internal Meta documents showing the company's intention to automate 90% of its assessment work through the application of AI. The infusion of AI would reportedly extend to Meta's algorithms, safety features and changes to how content is shared.
Sherman said the reporting generated an understandable wave of reactions from the public and Meta employees, but he indicated Meta is fully committed to keeping risk assessments under human control.
"Just to be very clear, AI is not making risk decisions at Meta," Sherman said. "Humans are still very much in control of the process and driving the process."
While the human element of risk reviews is intact, Meta is indeed planning to leverage AI to streamline aspects of the process.
Sherman confirmed some of NPR's reporting about how AI is being implemented into reviews. The internal documents indicated product teams will submit a questionnaire to receive an AI-driven "instant decision" identifying risks and potential mitigation measures that must be addressed before a launch.
"We tried to look at the decisions we were making and figure out what parts we can write down ahead of time ... and have a system automate it rather than having people remember to do things," Sherman said. As an example, he pointed to a hypothetical pre-determined "human-written" set of automated data deletion rules a new product can be put through before reaching a final human review.
The new process will initially apply in the areas of privacy, AI governance, youth protection, and safety and security. Sherman anticipates further expansion to accessibility, intellectual property and copyright as time goes on.
The streamlining is targeted for assessment areas that do not require "nuanced discussion about what's the right way to do it," according to Sherman. Issues flagged during an automated review — including areas that are unaddressed in automation — are sent for human intervention and assessment.
The "evolution" toward automation is twofold.
On one hand, Sherman said Meta views AI as "at the forefront of what we are doing as a company" and leveraging it will "help speed up processes." On the other, the process changes acknowledge the need to broaden work to include an all-encompassing view of digital governance needs.
"We are a technology company. Our core bread and butter is not building governance programs but building technology," Sherman said. "The real question was how can we use our skill within technology to solve some of these things and help make the risk review process more effective."
Meta's privacy governance initiatives represent the foundation for the digital governance endeavor.
Sherman said privacy reviews are "the backbone of how we make decisions and build." Privacy alone "continues to be really important" to the company, he said, but the way privacy principles address risks offered a "natural" opportunity to expand the reach to other digital domains and aim for a "one-stop shop."
"When you're thinking about privacy, that implicates issues of AI governance, youth protection. There is a give and take between those," Sherman said. "Thinking about those things holistically is important. If you want whatever your company is building to be a net-positive for the world, you need to think holistically."
Joe Duball is the news editor for the IAPP.