Study warns deepfakes can fool facial recognition services
8 March 2021
Researchers at Sungkyunkwan University in South Korea said commonly used deepfake-generation methods can fool facial recognition services, VentureBeat reports. Using artificial intelligence models trained on five datasets containing faces of celebrities and politicians, the researchers created 8,119 manipulated videos and found that application programming interfaces from Microsoft and Amazon were susceptible to attacks. “We believe our research findings can shed light on better designing robust web-based APIs, as well as appropriate defense mechanisms, which are urgently needed to fight against malicious use of deepfakes,” the researchers said.