NIST identifies AI cybersecurity vulnerabilities

In a new publication, the U.S. National Institute of Standards and Technology explored how machine learning systems can be exploited through cyberattacks. The report catalogs the ways artificial intelligence systems can be attacked and the mitigation strategies currently available, while noting that existing defenses "lack robust assurances that they fully mitigate the risks." Editor's note: Explore the IAPP AI Governance Center and subscribe to the AI Governance Dashboard.