NIST identifies AI cybersecurity vulnerabilities

The U.S. National Institute of Standards and Technology explored how machine learning can be exploited through cyberattacks in a new publication. The report outlines various ways artificial intelligence systems can be attacked, along with existing mitigation strategies, while noting current defenses "lack robust assurances that they fully mitigate the risks." Editor's note: Explore the IAPP AI Governance Center and subscribe to the AI Governance Dashboard.