California recently passed Senate Bill 1001, which bars companies and people from using bots that intentionally mislead those they are talking to into thinking they are human.
Putting aside the social and legal merits of this law, SB 1001 implicates a hitherto-abstract philosophical debate about when a simulation of intelligence crosses the line into sentience and becomes a true AI. Depending upon where you draw that line, this law is either discrimination against another form of sentient life or a timely remedy intended to protect users from exploitation by malevolent actors using human speech simulators.
Alan Turing — the father of artificial intelligence, though better known to the public for his role in breaking the German naval codes during World War II — foresaw the implications of his theories, which remain foundational to computer science, and was the first to enter this debate. He proposed his eponymous Turing test for artificial intelligence in 1950. To administer the test, a human evaluator converses, by text, with both a bot and another human, without knowing which is which. If the evaluator cannot reliably tell the bot from the human, the test is passed.
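For concreteness, here is one way that setup could be sketched in code. This is a minimal illustration, not anything Turing specified: the `bot_reply` and `human_reply` stubs and the `evaluator_guess` callback are hypothetical placeholders standing in for a real chatbot, a real person, and a real judge.

```python
import random

# Minimal sketch of the imitation game: an evaluator questions two hidden
# participants, "A" and "B", one of which is a machine, and then guesses
# which one is the bot. The reply functions below are placeholder stubs.

def bot_reply(message: str) -> str:
    return "That's an interesting question."        # a real chatbot would answer here

def human_reply(message: str) -> str:
    return "Let me think about that for a moment."  # in the real test, a person answers

def run_test(questions, evaluator_guess) -> bool:
    """Return True if the bot goes undetected, i.e. the evaluator guesses wrong."""
    labels = {"A": bot_reply, "B": human_reply}
    if random.random() < 0.5:                       # hide which label is the machine
        labels = {"A": human_reply, "B": bot_reply}

    transcript = [{name: reply(q) for name, reply in labels.items()} for q in questions]
    guess = evaluator_guess(transcript)             # evaluator names "A" or "B" as the bot
    truth = "A" if labels["A"] is bot_reply else "B"
    return guess != truth
```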
In Turing’s view, since such an AI would be empirically indistinguishable from a human being, it should be recognized as sentient and as intelligent as a human. By that standard, some AIs we have now would already qualify as sentient. As technology advanced, several thinkers challenged the Turing test as naïve, arguing that it was possible to simulate sentience without achieving it.
The best-known example is the Chinese room, a thought experiment proposed in 1980 by John Searle, professor emeritus of philosophy at Berkeley. The hypothetical runs as follows: imagine an English-speaking man placed in a room with a book that maps every possible string of Chinese characters to one appropriate output, also in Chinese. The book contains no translations, and its only instructions are in English. If a letter written in Chinese were passed under the door, the person inside the room could take the characters, look them up in the book, and pass back an appropriate response in Chinese. To anyone outside the room, the person inside appears to speak fluent Chinese, but he is merely following the instructions in his book, entirely ignorant of any meaning in the conversation. For all he knows, the characters on the page could be an invitation to a party, a declaration of friendship, a profanity, an angry rejection, a weather report, or even a repressed fact of history. His mastery of Chinese is only illusory, a simulation.
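To see how thin the room’s “understanding” really is, here is a minimal sketch in Python. The rule book is nothing more than a dictionary; the handful of entries and the `room` function are illustrative placeholders of my own, not anything from Searle’s paper.

```python
# The "rule book" as a plain lookup table: each incoming string of Chinese
# characters is mapped to one canned response. The entries are illustrative.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天是晴天。",  # "How is the weather today?" -> "It's sunny."
}

def room(message: str) -> str:
    """Follow the book: look up the characters and pass back the listed reply,
    with no understanding of what either the question or the answer means."""
    return RULE_BOOK.get(message, "请再说一遍。")  # fallback: "Please say that again."

print(room("你好吗？"))  # a fluent-looking reply produced by pure lookup
```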
In Searle’s view, this man does not speak Chinese any more than a computer possesses human sentience.
For a bot to violate the California law, it needs to be convincing enough to pass the Turing test, since an obviously artificial bot would not be able to “mislead…about its artificial identity.” Such bots would qualify as sentient under the Turing test, and someone holding Turing’s view would likely see this law as discrimination.
On the other hand, automated language generation has typically been performed with statistical language models, such as n-gram (Markov chain) models and their neural successors, which look at the preceding words and use learned probabilities to formulate a response one word at a time. That process is functionally very similar to Searle’s Chinese room, and under the logic of the Chinese room, there is no sentience present in the algorithm to discriminate against.
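As a rough illustration of that word-at-a-time process, here is a minimal sketch of a first-order Markov-chain generator in Python. The `train` and `generate` functions and the toy corpus are illustrative stand-ins; production chatbots use far larger models, but the underlying idea of sampling a likely next word is the same.

```python
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Record, for each word, the words observed to follow it in the corpus."""
    successors = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        successors[current].append(nxt)
    return successors

def generate(successors: dict, seed: str, length: int = 10) -> str:
    """Start from a seed word and repeatedly sample a plausible next word."""
    word, output = seed, [seed]
    for _ in range(length):
        choices = successors.get(word)
        if not choices:                    # no observed successor: stop early
            break
        word = random.choice(choices)      # duplicates make this frequency-weighted
        output.append(word)
    return " ".join(output)

model = train("the cat sat on the mat and the dog slept on the rug")
print(generate(model, "the"))              # e.g. "the cat sat on the rug"
```

Nothing in this sketch understands the sentences it emits, which is exactly the parallel the Chinese room draws.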
Regardless of which view is correct, it is undeniable that automatically generated speech will become increasingly indistinguishable from the genuine article produced by real human beings.
Moore’s Law, the observation that transistor counts (and with them computing power) roughly double every two years, keeps making the hardware behind chatbots faster and cheaper. Natural-text-generating chatbots, such as Mitsuku, have been around for years and are constantly improving. Although it may not happen in the next decade, it is almost certain that we will see AIs that reliably pass the Turing test within our lifetimes.
When that day comes, we will have to grapple with difficult metaphysical questions about sentience, the soul, and what it really means to be human. Laws like California Senate Bill 1001, when eventually applied to a purportedly sentient AI, will give that debate tangible stakes.
Photo credit: gerlos, “BB8 and its shadow,” via photopin