A program run by the U.S. Department of Defense has developed a new tool for catching fake videos created with artificial intelligence, MIT Technology Review reports. These fabricated videos, known as "deepfakes," often rely on generative modeling, a machine-learning technique in which a computer learns from real data and then produces realistic fake examples. The U.S. Defense Advanced Research Projects Agency's Media Forensics program focuses on detecting video forgery through subtle clues, such as how well the images mimic certain human characteristics. For now, the tool can catch missing movements, but the researchers note that the machine learning behind the fake videos can itself be trained to outsmart forensics tools, which will likely lead to a "deepfakes" arms race. (Registration may be required to access this story.)
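To make the idea of generative modeling concrete, here is a minimal toy sketch of the general concept the article describes: a model "learns" statistics from real data and then samples new fake examples that mimic it. This is purely illustrative and is not DARPA's tool or the deep neural networks actually used to produce deepfakes; the data and parameters are invented for the example.

```python
import random
import statistics

# Toy generative modeling: learn from real data, then produce
# realistic fake examples drawn from the learned distribution.
# (Illustrative only -- real deepfake generators are deep neural nets.)
random.seed(0)

# "Real" data: hypothetical measurements of some facial feature.
real_data = [random.gauss(5.0, 1.5) for _ in range(10_000)]

# "Training": estimate the distribution's parameters from real examples.
mu = statistics.fmean(real_data)
sigma = statistics.stdev(real_data)

# "Generation": sample fake examples that mimic the real data's statistics.
fake_data = [random.gauss(mu, sigma) for _ in range(10_000)]

print(round(mu, 2), round(statistics.fmean(fake_data), 2))
```

The same learn-then-sample loop, scaled up to neural networks trained on images, is what makes forged videos statistically similar to real footage, and hence hard to detect.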