A pair of University of California, Berkeley computer science researchers have developed a method to trick the "DeepSpeech" speech-recognition neural network, Motherboard reports. The researchers use adversarial machine learning to alter audio signals so that the network transcribes something other than what was actually said. The method takes an unaltered audio signal and adds a small amount of noise, just enough to trick the neural network into hearing a completely different sound. "With powerful iterative optimization-based attacks applied completely end-to-end, we are able to turn any audio waveform into any target transcription with 100 percent success by only adding a slight distortion," the researchers said.
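The idea behind an iterative optimization-based attack of this kind can be sketched in a few lines: compute the gradient of a loss toward the attacker's target output, and take many small steps on a perturbation while keeping it bounded. The toy sketch below is not DeepSpeech or the researchers' actual code; the "model" is a stand-in random linear classifier, and all names (`toy_logits`, `make_adversarial`) are hypothetical.

```python
import numpy as np

# Toy illustration of an iterative, gradient-based adversarial attack on a
# waveform. The "model" is a fixed random linear classifier, NOT a real
# speech recognizer; it only serves to show the optimization loop.

rng = np.random.default_rng(0)
N_SAMPLES, N_CLASSES = 256, 4                 # tiny "waveform", 4 fake labels
W = rng.normal(size=(N_CLASSES, N_SAMPLES))   # stand-in model weights

def toy_logits(x):
    """Stand-in for a speech model: waveform -> class scores."""
    return W @ x

def make_adversarial(x, target, eps=0.05, lr=0.01, steps=500):
    """Iteratively build a small perturbation (L-inf bounded by eps)
    that pushes the toy model's prediction toward `target`."""
    delta = np.zeros_like(x)
    onehot = np.eye(N_CLASSES)[target]
    for _ in range(steps):
        z = toy_logits(x + delta)
        p = np.exp(z - z.max())
        p /= p.sum()                          # softmax probabilities
        grad = W.T @ (p - onehot)             # d(cross-entropy)/d(delta)
        delta = np.clip(delta - lr * grad, -eps, eps)
    return x + delta

x = rng.normal(scale=0.1, size=N_SAMPLES)     # "clean" waveform
# pick a target class the model does not already predict
target = (int(np.argmax(toy_logits(x))) + 1) % N_CLASSES
adv = make_adversarial(x, target)
```

After the loop, `adv` differs from `x` by at most `eps` per sample (the "slight distortion"), yet the toy model now predicts the attacker-chosen class, which mirrors, in miniature, how the end-to-end attack steers a transcription.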