NIST identifies AI cybersecurity vulnerabilities

The U.S. National Institute of Standards and Technology explored how machine learning can be exploited through cyberattacks in a new publication. The report highlights the ways artificial intelligence systems can be attacked and the mitigation strategies that currently exist, while noting that current defenses "lack robust assurances that they fully mitigate the risks." Editor's note: Explore the IAPP AI Governance Center and subscribe to the AI Governance Dashboard.