Thinking as a Hobby


Man vs. Machine

Blogger Kevin Drum wonders why anyone would give a damn about the Kasparov/Deep Junior "rematch".

The answer is that they shouldn't, at least not much.

The reason is that the way these chess-playing computers play chess isn't even moderately close to the way humans do. So if one of the main goals of developing game-playing algorithms is to achieve high-level AI, attacking games with brute computing power (Deep Blue could evaluate some 200 million positions per second) just ain't gonna cut it.
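To make "brute force" concrete, here's a minimal sketch of the kind of exhaustive search these programs are built on: a negamax search that examines every line of play. This is nothing like Deep Blue's actual code; I've used the toy game Nim (take 1-3 stones, last stone wins) just so the example is self-contained.

```java
public class BruteForceDemo {
    static long nodes = 0; // positions examined, to show the explosion

    // Negamax over Nim: from 'stones' you may take 1, 2, or 3; the player
    // who takes the last stone wins. Returns +1 if the side to move can
    // force a win from this position, -1 otherwise.
    static int negamax(int stones) {
        nodes++;
        if (stones == 0) return -1; // opponent just took the last stone: we lost
        int best = -1;
        for (int take = 1; take <= Math.min(3, stones); take++) {
            best = Math.max(best, -negamax(stones - take));
        }
        return best;
    }

    public static void main(String[] args) {
        int value = negamax(21);
        System.out.println("value=" + value + " nodes=" + nodes);
    }
}
```

Even for a game this trivial, the node count grows exponentially with the number of stones; a real chess engine tames the same explosion with pruning and raw speed, not with anything resembling human pattern recognition.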

Victories of such programs say more about ever-increasing computing power than they do about our knowledge of cognition (NOTE: the two are not the same).

This is why game domains like Go are so interesting.

Chess is played on an 8 x 8 grid, with all 32 pieces already placed on the board. Thus there are exactly 20 possible opening moves (16 pawn moves and 4 knight moves), and a typical position offers around 35 legal moves to choose from.

Go is played on a 19 x 19 grid and begins with an empty board. Armies are built stone by stone as they are played onto the board. Thus there are 361 possible opening moves for the first player, 360 for the second, and so on. Brute-force search breaks down severely at this scale.
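The arithmetic is easy to check. Game-tree size grows roughly as b^d for branching factor b and search depth d; the snippet below uses the commonly cited average branching factors of about 35 for chess and about 250 for Go (my round figures, not measurements from any particular program).

```java
import java.math.BigInteger;

public class BranchingDemo {
    // Approximate game-tree size: branching factor raised to the depth in plies.
    static BigInteger treeSize(int branching, int depth) {
        return BigInteger.valueOf(branching).pow(depth);
    }

    public static void main(String[] args) {
        int depth = 10; // ten plies, i.e. five moves per side
        BigInteger chess = treeSize(35, depth);
        BigInteger go = treeSize(250, depth);
        System.out.println("chess ~35^10  = " + chess);
        System.out.println("go    ~250^10 = " + go);
        System.out.println("Go tree is ~" + go.divide(chess) + "x larger");
    }
}
```

At a mere ten plies the Go tree is already hundreds of millions of times larger than the chess tree, which is why hardware gains alone don't carry over.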

To succeed at Go, algorithms will have to work much more like the human brain, relying on pattern recognition and other associative processes (at least until we develop petaflop processors). But the point is, if we really want to develop smarter machines, we have to be smarter.

It is innovative architecture that will yield smarter machines, not simply brute computing power. So projects like Deep Blue don't really give us any significant insight into how we think, or into how to build truly intelligent machines. They're just another benchmark for processing power.

Philip and I are enjoying working with neural nets so far. He's taking a graduate-level math class devoted specifically to neural net applications, and meanwhile we're building a customized application on top of two open-source, Java-based libraries (JOONE and JGAP). The goal is something that will let us evolve neural net architectures (using the NEAT methodology created by researchers at UT Austin) to attack difficult domains like Go.

We're working under the assumption that difficult domains will require a type of hardware similar to organic neural nets, and that the best way to optimize their organization and function is by using the machinery of evolution.

I'll let you know when we make any progress.

Meanwhile, no... Kasparov vs. Deep Junior isn't really all that relevant. But when a computer beats one of the world's top Go players... that will be significant.

Stay tuned.
