Chatbot's Passing of the Turing Test Up For Debate
Tech blogs and news outlets are backpedaling a bit after publishing a frenzy of reports claiming that a "supercomputer" had passed the famous Turing Test last week. The test, adapted from the writings of computer science legend Alan Turing, is generally intended to determine whether a computer can exhibit behavior indistinguishable from that of a human.
Reports from all over the web, including PC World, NBC News, Ars Technica and many more, stated that a "supercomputer" known as Eugene Goostman fooled users into thinking it was human.
Flaws in the Test
While the outcome was originally hailed as a "milestone," it's not exactly the artificial intelligence victory that the media seemed to think it was. According to the original press release from the University of Reading, the Turing Test is passed when a computer is mistaken for a human more than 30 percent of the time. Eugene Goostman is reportedly the first computer program to convince more than 30 percent of its human judges that it was indeed human.
Unfortunately, many problems arise when evaluating these claims. First of all, while Eugene was originally reported to be a "supercomputer" (the terminology has since been edited to "computer programme" in the press release), the software is actually a chatbot similar to Cleverbot, capable of carrying on a fairly realistic typed conversation with a human user. While this is impressive, it certainly does not mean that Eugene has anything resembling the intelligence that Turing had in mind: it is simply a script that can mimic human conversation. Like Watson and Deep Blue before it, it can be seen as software that is exceptionally good at one particular task, rather than being capable of learning new things or reasoning as a human child could.
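To make the "simply a script" point concrete, here is a minimal ELIZA-style sketch of how this class of chatbot works. The rules and replies below are hypothetical illustrations, not Eugene Goostman's actual implementation (which was never published): the program just matches keywords and fills in canned templates, with no understanding of the conversation.

```python
import re

# Hypothetical pattern/response rules for illustration only.
RULES = [
    (r"\bi feel (.+)", "Why do you feel {0}?"),
    (r"\bmy (\w+)\b", "Tell me more about your {0}."),
    (r"\b(hello|hi)\b", "Hello! How are you today?"),
]

def reply(message: str) -> str:
    """Return a canned response by matching simple regex patterns."""
    text = message.lower()
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            # Echo the captured words back inside a template.
            return template.format(*match.groups())
    # A generic fallback keeps the conversation moving when nothing matches.
    return "I see. Please go on."
```

A judge chatting with such a script for a few minutes can easily be fooled by plausible-sounding replies, even though the program performs no reasoning at all, which is exactly the gap between passing a conversational test and possessing intelligence.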
Also, it could be argued that Eugene gamed the system by claiming to be a 13-year-old boy from Ukraine, thus steering human judges to be more forgiving of poor grammar or illogical answers.
Does the Turing Test Even Matter?
Despite being a great jumping-off point for philosophical conversation, the test itself has often been seen as frivolous or inconsequential in the world of artificial intelligence. Indeed, the "rules" for the test have undergone many changes over the years, and the current test is actually quite different from Turing's original. Put simply, a program could technically pass the Turing Test by fooling human users (as was done in this case), but this certainly does not indicate that the program is intelligent in any sense whatsoever.
A Lesson in Creating Good Content?
According to Techdirt.com, one of the biggest red flags in this story is the man who organized the test, Kevin Warwick. Apparently, Warwick is known for creating media blitzes with bogus tech stories such as claiming to be the first cyborg in history and stating that his lab had infected a human with a computer virus. The Techdirt.com article states that just a little bit of research into Warwick could have prevented the unwarranted media attention.
This is certainly not the first time major news outlets and blogs have reported a story that turned out to be completely overblown, nor will it be the last. At the same time, it could be seen as a warning to webmasters starting blogs that cover current news. Is it more important to you to have thoroughly researched and well-reasoned content, or is it better to make sure your blog has up-to-the-minute discussions of trending stories regardless of their validity? While the ideal answer might lie somewhere in the middle, the strategy you take ultimately depends on your long-term goals. The latter is likely great for traffic, but the former could help establish a solid reputation.