Author: Uri Blass
Date: 06:58:47 06/01/04
On June 01, 2004 at 02:49:55, José Carlos wrote:

>On June 01, 2004 at 02:33:18, Sune Fischer wrote:
>
>>On May 31, 2004 at 20:06:37, Robert Hyatt wrote:
>>>
>>>I don't understand all this "fiddling". IE oddball books. ponder=on vs
>>>ponder=off, endgame tables on, endgame tables off. Learning on. Learning off.
>>>Etc.
>>>
>>>I would have no objection if someone plays a long match, crafty vs program S,
>>>then clears the learning data and plays a long match crafty vs program T. But
>>>not disabling learning completely. Then I _know_ the book will cause a
>>>problem... Because it isn't hand-tuned whatsoever...
>>
>>I don't see what is so interesting in trying to win the same games over and
>>over. That kind of book cooking hasn't got very much to do with smarts of the
>>engine, IMO.
>>
>>Most programmers are interested in real algorithmic progress, not in whether
>>they can win every game just by getting the same couple of completely won
>>positions out of the book.
>
> Book learning, as well as any other kind of learning, is a nice algorithmic
>exercise. It takes time to develop and fine-tune. Disabling it is telling the
>programmer "you wasted your spare time".

For me the main problem was writing a program whose book is based on pairs of position and move (the public Movei has a book based on games, and it did not do a binary search to find the position). Now that this seems to work, I expect that book learning to avoid losing the same line twice is an easier task.

I can keep a learning score for every move in the book. If the program loses, I plan to reduce the learning score of every book move it played by learning_lose[ply]; if it wins, I plan to increase the score by learning_win[ply]. The program always plays the book move with the highest learning score.

The idea is that the arrays learning_lose[ply] and learning_win[ply] can be edited by the user, but I think that even the simple case of learning_lose[ply]=1 and learning_win[ply]=0 can prevent losing the same game twice: the starting learning values are 0, so after losing a game with 1.e4, the move 1.e4 has learning value -1 while the other moves still have 0, and 1.e4 will not be the program's choice unless it also loses with all the other opening moves.

Uri
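A minimal C sketch of the learning-score scheme described in the post above. The BookMove structure, the array sizes, and the function names (init_learning, select_book_move, update_learning) are illustrative assumptions, not taken from Movei; the sketch only shows the per-ply penalty/bonus update and the highest-score selection rule.

#define MAX_PLY 64

/* Hypothetical book entry: a position is assumed to map to a set of
   candidate moves, each carrying a learning score that starts at 0. */
typedef struct {
    int move;            /* encoded move; representation left open */
    int learning_score;  /* adjusted after each game */
} BookMove;

/* User-editable per-ply penalty and bonus, as described in the post. */
int learning_lose[MAX_PLY];
int learning_win[MAX_PLY];

/* Simplest setting from the post: penalize a lost line by 1 at every ply,
   never reward wins.  This alone prevents repeating a lost book line
   until every alternative at that ply has also lost. */
void init_learning(void)
{
    for (int ply = 0; ply < MAX_PLY; ply++) {
        learning_lose[ply] = 1;
        learning_win[ply]  = 0;
    }
}

/* Pick the index of the book move with the highest learning score. */
int select_book_move(const BookMove *moves, int count)
{
    int best = 0;
    for (int i = 1; i < count; i++)
        if (moves[i].learning_score > moves[best].learning_score)
            best = i;
    return best;
}

/* After a game, walk the book moves the program actually played and
   update their scores: subtract learning_lose[ply] after a loss,
   add learning_win[ply] after a win.  'played' holds pointers to the
   book entries used at each ply of the game's book line. */
void update_learning(BookMove **played, int book_plies, int we_lost)
{
    for (int ply = 0; ply < book_plies && ply < MAX_PLY; ply++) {
        if (we_lost)
            played[ply]->learning_score -= learning_lose[ply];
        else
            played[ply]->learning_score += learning_win[ply];
    }
}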