Over the weekend, researchers claimed that the computer program Eugene Goostman had reached an artificial intelligence milestone by passing the Turing Test. The news was exciting because of the test’s outsized role in the public imagination, but opinion is already turning against the researchers for making such a bold claim.
At the Turing Test 2014 event, organized by the University of Reading, Eugene convinced 33% of judges that it was a 13-year-old boy from Ukraine rather than a computer program. The organizers say that crossing a 30% threshold counts as a passing grade, a claim apparently based on a misreading of Alan Turing’s paper “Computing Machinery and Intelligence,” but even this is a nitpick. If an additional 17% of the judge pool had guessed the wrong way, we wouldn’t be any closer to declaring a piece of software self-aware.
Turing Test avoids difficult questions of intelligence and consciousness
Turing, arguably the most important thinker in the history of computing, originally proposed the Imitation Game as a way to avoid having to define consciousness. Knowing that neurons are firing and axons are being stimulated doesn’t explain why consciousness emerges, so there’s no reason to assume that tracking 0s and 1s will help us identify when computers have become intelligent (if that ever happens). In both cases, we have no direct contact with the other’s inner life – we assume it’s there because of how it manifests.
“The only way to know that a man thinks is to be that particular man,” Turing writes. “Instead of arguing continually over this point it is usual to have the polite convention that everyone thinks.”
In proposing his Imitation Game, Turing is saying that we should extend the same courtesy to computers. If, at some point, we see the manifestation of an inner life that is indistinguishable from that of a person, we should assume that the computer is intelligent.
Eugene relies on obfuscation and bluffing instead of conversation
By framing the Eugene program as a young non-native speaker, the researchers were simply gaming the system, giving judges a reason to dismiss odd expressions, poor grammar, and the like. Most programs that have attempted the Turing Test in the past have used similar chatbot strategies, relying on obfuscation and bluffing instead of active engagement. Turing didn’t add any rules about approaching the test in good faith, but we can.
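The bluffing strategy is easy to sketch: rather than understanding a question, a chatbot can match a few keywords and otherwise dodge in persona-consistent ways. Here is a minimal, hypothetical illustration in Python – all of the canned responses and keywords are invented for the example, and this is of course not Eugene’s actual code:

```python
import random

# Toy sketch of the deflection strategy common to Turing Test chatbots:
# if a keyword matches, return a canned persona answer; otherwise dodge.
# Responses are invented for illustration, not taken from Eugene Goostman.

DEFLECTIONS = [
    "I am only 13, so maybe I don't understand your question.",
    "Why do you ask? Let's talk about something else.",
    "My English is not so good, sorry. What is your favourite food?",
]

KEYWORD_RESPONSES = {
    "where": "I live in Odessa, a big city in Ukraine.",
    "name": "My name is Eugene. And yours?",
    "old": "I am 13 years old.",
}

def reply(message: str) -> str:
    """Return a canned answer if a keyword matches, else deflect."""
    lowered = message.lower()
    for keyword, answer in KEYWORD_RESPONSES.items():
        if keyword in lowered:
            return answer
    return random.choice(DEFLECTIONS)

print(reply("Where do you live?"))              # keyword hit: canned answer
print(reply("Explain quantum chromodynamics.")) # no hit: random deflection
```

A judge primed to expect a teenager with shaky English reads the deflections as personality rather than as failures of comprehension, which is exactly why the persona framing games the test.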
Public versions of Eugene Goostman mostly seem to be down right now under the flood of public interest, but you can check a recent conversation between Scott Aaronson and Eugene here. This isn’t intelligence, no matter how many people fell for it.
TechDirt argues that the computer did not pass the Turing Test; see that story here.