Author: Jay Scott
Date: 07:17:39 05/10/05
On May 09, 2005 at 22:49:37, Michael Yee wrote:

>I came across this interesting/strange paper today:

The results don't fully support the claims, which is bad but unfortunately common. And, like other work by David B. Fogel, the paper doesn't show much knowledge of past work or of the state of the art in game playing and game learning. But the paper is strong in its use of machine learning--chess programmers trying out these techniques make a lot of mistakes that are avoided here.

I thought the paper was interesting, and we can learn something from it. But don't believe everything it says.

>(1) started with known good values for material and PSTs (so did in fact
>incorporate human knowledge)

Avoiding the use of human knowledge does not seem to have been a goal. Given that, starting with best-guess values is an effective way of speeding up learning (a rough sketch of the idea follows the post).

>(3) had artificial rule that games lasting 50 moves were draws (although I can't
>tell if this was just for during training)

>(4) fixed search depth of 4 ply (or 6 ply when extending for quiescence), but
>maybe only during training?

The learning algorithm is extremely slow. Long searches and long games would have wasted a lot of CPU time.

>(6) cites Kasparov communication as a reference...

That was depressing. :-( Name-dropping for an obvious suggestion.

Jay
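To make the seeding point concrete, here is a minimal sketch, assuming an evolutionary learner that starts from the conventional material values and perturbs them with Gaussian noise. The fitness function below is a toy stand-in; in the paper's setup, fitness came from self-play games, each searched to a fixed depth (4 ply, 6 with quiescence) and declared drawn after 50 moves to bound CPU time. All names and parameters here are illustrative, not taken from the paper.

    import random

    # Conventional starting values: pawn, knight, bishop, rook, queen.
    # Seeding here, rather than starting from random weights, is what
    # speeds up learning.
    SEED_MATERIAL = {"P": 1.0, "N": 3.0, "B": 3.0, "R": 5.0, "Q": 9.0}

    def mutate(weights, sigma=0.1):
        """Gaussian perturbation of every weight; sigma sets the step size."""
        return {k: v + random.gauss(0.0, sigma) for k, v in weights.items()}

    def fitness(weights):
        """Placeholder. A real version would score the weights by playing
        self-play games with a fixed-depth search and a 50-move draw cutoff.
        This toy stand-in just prefers weights near the seed so the loop runs."""
        return -sum((weights[k] - SEED_MATERIAL[k]) ** 2 for k in weights)

    # Simple (1+1) evolution strategy: keep a mutant only if it scores better.
    best = dict(SEED_MATERIAL)
    best_score = fitness(best)
    for generation in range(100):
        child = mutate(best)
        score = fitness(child)
        if score > best_score:
            best, best_score = child, score

    print(best)

With a seed this close to sensible values, the search only has to refine, not discover, the material weights, which is why starting from best guesses saves so many generations of an already slow algorithm.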