Thinking as a Hobby


Salon on Turing and AI

Catch it while it's still around...

Salon has an interesting overview of the Loebner Prize, a half-baked contest meant to find a program that can pass the Turing Test.



To win the Loebner competition, software programs must mimic human conversation. Such programs are known as "chatting robots" or, more often, "chatterbots" or simply "bots." But today's academic A.I. researchers consider the chatterbot approach simpleminded. The Loebner competition, they argue, isn't a real measure of progress in artificial intelligence but merely a "bot beauty contest." To mainstream researchers, Loebner is a self-aggrandizing fool and his contest is hokum: at best irrelevant and at worst a public disservice that encourages bad science.



Sounds like it to me. I'd heard of the Loebner Prize, but this article does a good job of profiling the guy behind it, and it gives a solid overview of Turing, AI, and the past 50 years of attempts at building natural language processors.
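
For what it's worth, the "chatterbot" approach the article dismisses really is that simple under the hood: surface pattern matching against canned response templates, with no model of meaning anywhere. Here's a minimal ELIZA-style sketch in Python; the rules are invented for illustration:

    import random
    import re

    # A handful of illustrative pattern -> response rules. Real contest
    # entries use thousands, but the principle is identical: match surface
    # text, echo fragments of it back, and understand nothing.
    RULES = [
        (r"\bI feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (r"\bI am (.+)", ["What makes you say you are {0}?"]),
        (r"\byou\b", ["We were talking about you, not me."]),
        (r".*", ["Tell me more.", "Go on.", "Interesting. Please continue."]),
    ]

    def respond(utterance):
        """Return a canned reply by firing the first rule that matches."""
        for pattern, templates in RULES:
            match = re.search(pattern, utterance, re.IGNORECASE)
            if match:
                return random.choice(templates).format(*match.groups())

    print(respond("I feel like nobody gets me"))  # echoes the captured phrase back
    print(respond("Tell me about horses"))        # catch-all rule: a stock deflection

A table of a few dozen rules like these can carry small talk for a few turns, which says more about human judges than about machine intelligence.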



Although the challenge was at first embraced by the academic A.I. community, passing the Turing test -- which proved to be a rather more difficult nut to crack than some prominent A.I. people had said it would be -- has long since fallen out of fashion as a legitimate goal or benchmark among "real" A.I. researchers. The A.I. establishment has for more than a decade put more energy into explaining why the Turing test is irrelevant than it has into passing it.



Yup, that's about right. When you get right down to it, the Turing Test is really pretty silly. There is still rich debate about what intelligence even is, though most working definitions involve a wide range of psychological attributes: problem solving, spatial reasoning, the ability to acquire and apply new knowledge, and so on.

The fundamental attributes required to pass the Turing Test are natural language ability and a propensity for deception (after all, the point is to trick a human judge).
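
Concretely, Turing's setup is a blind, text-only protocol: a judge converses with a hidden human and a hidden machine over identical channels and has to say which is which. A hypothetical sketch of that protocol in Python, with all the participants as stand-in functions:

    import random

    def imitation_game(ask, identify, human_reply, machine_reply, rounds=5):
        # Hide the two participants behind anonymous channels "A" and "B",
        # randomly assigned so the judge can't rely on position.
        channels = {"A": human_reply, "B": machine_reply}
        if random.random() < 0.5:
            channels = {"A": machine_reply, "B": human_reply}

        # The judge interrogates both channels over text only.
        transcript = []
        for i in range(rounds):
            question = ask(i)
            answers = {name: reply(question) for name, reply in channels.items()}
            transcript.append((question, answers))

        # The judge names the channel they believe is the machine.
        guess = identify(transcript)
        return channels[guess] is machine_reply  # True if the judge was right

    # Toy run with stand-in participants:
    caught = imitation_game(
        ask=lambda i: "Round %d: what does hay smell like?" % i,
        identify=lambda transcript: "A",  # a judge who always guesses A
        human_reply=lambda q: "Sweet and dusty, like a barn in summer.",
        machine_reply=lambda q: "Hay is dried grass used as animal fodder.",
    )
    print("Judge caught the machine:", caught)

A machine "passes" when, over many sessions, judges do no better than the coin flip this always-guess-A judge achieves.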

And what about the contest itself?



When Hugh Loebner created the Loebner Prize in 1989 to spur progress toward technology that could pass the Turing test, the A.I. establishment welcomed him with open arms. Under the aegis of the Cambridge Center, a blue-ribbon panel chaired by cognitive scientist and author Daniel Dennett and composed of a who's who of computer scientists from leading institutions, organized the first contest and wrote the first rules. When, after two years of planning, the first event was held in 1991 at Boston's Computer Museum, it was a gala affair partially underwritten by the National Science Foundation and Sloan Foundation.

...

There was only one problem: the A.I. programs entered in the contest performed horribly. They were pathetic. The mountain, as it were, had gone into labor and given birth to a mouse.



Well, yeah.

True natural language processing is near the tip of the AI pyramid. That's because being able to use language intelligibly relies on sensory perception and interaction with the real world (or a suitably complex virtual environment). Words are arbitrary symbols used as shorthand for real-world concepts.

That is, the word "horse" has to have a real-world referent for it to have any meaning at all (see Searle's Chinese Room).

It's not enough for the word "horse" to be associated with a bunch of strict grammatical rules and with other symbols in a complicated database. The crucial linkage is between the word and its real-world counterpart. Thus, "horse" has little or no meaning to you if you cannot associate it with something real.
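
To see how empty pure symbol-shuffling is, consider a toy lexicon in which every word is "defined" only by other words (the entries below are made up):

    # A toy "dictionary-only" lexicon: every symbol is defined purely in
    # terms of other symbols. Entries invented for illustration.
    LEXICON = {
        "horse": ["animal", "mammal", "gallop", "mane"],
        "animal": ["organism", "living"],
        "gallop": ["run", "horse"],
        "run": ["move", "fast"],
    }

    def expand(word, depth=2):
        """Follow associations outward from a word. However deep we go,
        we only ever reach more symbols, never anything in the world."""
        frontier = {word}
        for _ in range(depth):
            frontier = {assoc for w in frontier for assoc in LEXICON.get(w, [])}
        return frontier

    print(expand("horse"))
    # {'organism', 'living', 'run', 'horse'} -- note it loops back to
    # "horse" itself; purely symbolic definitions are ultimately circular.

Chase the associations as far as you like; you only ever land on more symbols.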

This is why most of the fundamental research in AI right now has to center on building the tools to allow machines to interact with their environments in a meaningful way. That means we have to build better eyes, ears (or sonar sensors), arms and other manipulators, and locomotors first.

Just as the biological brain could not have evolved without a biological chassis to house it, artificial minds will not emerge without bodies of their own. The "brain-in-a-box" is an AI fantasy. For something to be considered intelligent, it's going to have to interact, in a rich and meaningful way, with its environment.

