2014-06-09

The Turing test and Artificial Stupidity

Every news outlet is currently covering the story that a chatbot pretending to be a 13-year-old Ukrainian boy has deceived 33% of human judges into thinking it was human, thereby "passing the Turing test for the first time".

There are so many problems with the Turing test (even with the numerous refinements that have been proposed) that I don't know if it will ever tell us anything useful. The creators of the above chatbot hinted that part of their success in convincing the judges was that “his age ... makes it perfectly reasonable that he doesn’t know everything” -- in other words, to make a believable bot, you can't give it super-human knowledge or capabilities, even when those are technically trivial to provide (e.g. computers can multiply large numbers almost instantly). Deliberately limiting a machine's capabilities so that it appears human is known as "artificial stupidity". That artificial stupidity is *required* to pass the Turing test illustrates one of the deepest issues with the test, and one that cannot be fixed by simply tweaking the rules: the Turing test is a test of human dupe-ability, not of machine intelligence.
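
To make the idea concrete, here's a toy sketch of what artificial stupidity might look like in code. This is entirely my own invention -- the function, thresholds, and wording are not from the chatbot's creators -- but it shows the trick: hide the machine's instant arithmetic behind fake hesitation, give-ups on big numbers, and the occasional mistake.

    import random
    import time

    def humanlike_multiply(a: int, b: int) -> str:
        """Answer a multiplication question the way a person might,
        not the way a computer can. All thresholds are made up."""
        digits = len(str(abs(a))) + len(str(abs(b)))

        # A computer knows the answer instantly; pretend to think instead.
        time.sleep(min(1.0 * digits, 10.0))

        # Past a few digits, a human (especially a 13-year-old)
        # would usually just give up.
        if digits > 6:
            return "no idea, that's way too big to do in my head"

        # Even on feasible problems, people slip up now and then.
        answer = a * b
        if random.random() < 0.05 * digits:
            answer += random.choice([-1, 1]) * random.randint(1, 10)

        return f"hmm... I think it's {answer}?"

    print(humanlike_multiply(17, 23))

Note that every one of those additions makes the program objectively *worse* at arithmetic -- and yet, by the logic of the Turing test, more "intelligent".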

I'm pretty sure we'll start seeing several claims per year that a bot has "passed the Turing test", each followed by a flurry of discussion about what was actually tested and whether the result is believable or even meaningful, until it becomes so clichéd to say that your bot passed the Turing test that nobody with a halfway decent AI would actually *want* to claim their AI passed a test of this form.

Hopefully we'll see the day when the Turing test is inverted, and we realize we need a test to establish that someone is a "genuine human" and not a bot ;-)  But until then, we still have a heck of a lot of work to do!