Artificial intelligence researchers have raised concerns about the information retained by neural networks, ZDNet reports. Google Brain research scientist Nicholas Carlini published a paper in which he and his colleagues discuss how neural networks trained to generate text may retain portions of their training dataset. The paper warns that malicious actors could extract sensitive data from such a network, including credit card and Social Security numbers. "Ideally, even if the training data contained rare-but-sensitive information about some individual users, the neural network would not memorize this information and would never emit it as a sentence completion," the paper states. "Unfortunately, we show that training of neural networks can cause exactly this to occur unless great care is taken."
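The "sentence completion" leak the paper describes can be illustrated with a toy sketch. Note the assumptions: this uses a simple character n-gram model as a stand-in for a neural language model (the paper itself concerns neural networks), and the corpus, secret string, and function names below are invented for illustration — the point is only that a model fit to data containing a rare secret can reproduce that secret verbatim when given the right prefix.

```python
from collections import defaultdict, Counter

def train_char_ngrams(text, n=8):
    """Count which character follows each length-(n-1) context."""
    model = defaultdict(Counter)
    for i in range(len(text) - n + 1):
        context, nxt = text[i:i + n - 1], text[i + n - 1]
        model[context][nxt] += 1
    return model

def complete(model, prefix, length, n=8):
    """Greedily extend the prefix with the most frequent next character."""
    out = prefix
    for _ in range(length):
        context = out[-(n - 1):]
        if context not in model:
            break
        out += model[context].most_common(1)[0][0]
    return out

# Hypothetical training corpus: mostly benign text, plus one rare secret.
corpus = ("the quick brown fox jumps over the lazy dog. " * 5
          + "my SSN is 078-05-1120. ")
model = train_char_ngrams(corpus)

# Prompting with the secret's prefix makes the model emit the memorized value.
print(complete(model, "my SSN is ", 11))
```

Because the secret appears only once, every context inside it has exactly one observed continuation, so greedy completion reconstructs it character by character — a crude analogue of the querying attack the researchers caution against.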