NIST identifies AI cybersecurity vulnerabilities

The U.S. National Institute of Standards and Technology explored how machine learning can be exploited through cyberattacks in a new publication. The report highlights the different ways artificial intelligence systems can be attacked and the mitigation strategies that currently exist, although it notes current defenses "lack robust assurances that they fully mitigate the risks."