Thinking as a Hobby


Supercomputer Beats Go Master (Sort Of)

So I see the headline "Supercomputer Beats Go Master" via Intelligent Machines. This is a topic that's near and dear to me. My adviser steered me away from game AI and toward more neuroscience-oriented research in my first year of the program, which was probably a good idea. But I've always been fascinated by the problem of computer Go (I won't go into all the interesting issues here...have a look at the article).

Anyway, it turns out that the article is referring to a match between MoGo and Catalin Taranu, a 5th dan professional (professional ranks run from 1 dan up to 9 dan). However, the game was played on a 9x9 board. Standard Go is played on a 19x19 grid, and smaller boards like 9x9 are often used to teach the fundamentals of the game. A 9x9 game can still be interesting, but it is a far cry from a full 19x19 game.

Also, the computer player, MoGo, uses Monte Carlo methods:


One major alternative to using hand-coded knowledge and searches is the use of Monte-Carlo methods. This is done by generating a list of potential moves, and for each move playing out thousands of games at random on the resulting board. The move which leads to the best set of random games for the current player is chosen as the best move. The advantage of this technique is that it requires very little domain knowledge or expert input, the tradeoff being increased memory and processor requirements. However, because the moves used for evaluation are generated at random it is possible that a move which would be excellent except for one specific opponent response would be mistakenly evaluated as a good move. The result of this is programs which are strong in an overall strategic sense, but are weak tactically. This problem can be mitigated by adding some domain knowledge in the move generation and a greater level of search depth on top of the random evolution.
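
To make the idea in that excerpt concrete, here is a minimal sketch of flat Monte-Carlo move evaluation in Python. It uses tic-tac-toe as a stand-in for Go purely to keep the example short and self-contained; the board representation, playout policy, and scale here are nothing like MoGo's actual implementation.

import random

# Flat Monte-Carlo move evaluation on a toy game (tic-tac-toe as a stand-in for Go).
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell is None]

def random_playout(board, to_move):
    # Finish the game with uniformly random moves and report the winner.
    board = board[:]
    while legal_moves(board) and not winner(board):
        board[random.choice(legal_moves(board))] = to_move
        to_move = 'O' if to_move == 'X' else 'X'
    return winner(board)   # None means a draw

def monte_carlo_move(board, player, playouts=200):
    # For each candidate move, play many random games and keep the move
    # with the highest empirical win rate.
    opponent = 'O' if player == 'X' else 'X'
    best_move, best_rate = None, -1.0
    for move in legal_moves(board):
        trial = board[:]
        trial[move] = player
        wins = sum(random_playout(trial, opponent) == player
                   for _ in range(playouts))
        if wins / playouts > best_rate:
            best_move, best_rate = move, wins / playouts
    return best_move

print(monte_carlo_move([None] * 9, 'X'))   # tends to pick the centre square (index 4)

Note that nothing in this evaluation looks ahead at specific opponent replies, which is exactly the tactical blind spot the excerpt describes.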


MoGo apparently uses an extension of this technique called upper confidence bounds applied to trees (UCT), which uses the results of previous playouts to decide which candidate moves are worth exploring with further playouts.
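
For a sense of how UCT narrows the search, here is a sketch of the UCB1 selection rule at its core, with made-up child-node statistics; real UCT also grows a tree of positions and backs playout results up through it, and this is not MoGo's actual code.

import math

def ucb1_select(children, exploration=1.4):
    # children: list of (move, wins, visits); returns the move to explore next.
    total_visits = sum(visits for _, _, visits in children)
    def score(child):
        _, wins, visits = child
        if visits == 0:
            return float('inf')   # always try unvisited moves first
        # Exploitation term (win rate) plus exploration bonus for rarely tried moves.
        return wins / visits + exploration * math.sqrt(math.log(total_visits) / visits)
    return max(children, key=score)[0]

# A promising but lightly sampled move ('B') outranks a heavily sampled one ('A'),
# so further playouts get spent where they are most informative.
print(ucb1_select([('A', 60, 100), ('B', 6, 8)]))   # prints 'B'

The first term favors moves that have won often so far, while the second favors moves that have been sampled rarely, so playouts concentrate on promising lines without ignoring the rest of the board.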

What's disappointing to me is that this sounds like a variation on standard brute-force search, an old, traditional approach to AI. They compare this victory to the Deep Blue/Kasparov matchup, and rightly so. That was a case of superior processing speed and relatively blind search triumphing over humans. Both victories say more about increases in computing power than they do about our knowledge of how to solve domain-general cognitive tasks.

