Computer Chess Club Archives



Subject: Re: Thinker 4.6b third after 1st round!

Author: José Carlos

Date: 07:20:22 06/01/04

On June 01, 2004 at 09:58:47, Uri Blass wrote:

>On June 01, 2004 at 02:49:55, José Carlos wrote:
>
>>On June 01, 2004 at 02:33:18, Sune Fischer wrote:
>>
>>>On May 31, 2004 at 20:06:37, Robert Hyatt wrote:
>>>>
>>>>I don't understand all this "fiddling": oddball books, ponder=on vs.
>>>>ponder=off, endgame tables on or off, learning on or off, etc.
>>>>
>>>>I would have no objection if someone plays a long match, Crafty vs. program
>>>>S, then clears the learning data and plays a long match, Crafty vs. program
>>>>T. But not with learning disabled completely. Then I _know_ the book will
>>>>cause a problem, because it isn't hand-tuned whatsoever...
>>>
>>>I don't see what is so interesting in trying to win the same games over and
>>>over. That kind of book cooking hasn't got much to do with the smarts of the
>>>engine, IMO.
>>>
>>>Most programmers are interested in real algorithmic progress, not in whether
>>>they can win every game just by getting the same couple of completely won
>>>positions out of the book.
>>
>>
>>  Book learning, like any other kind of learning, is a nice algorithmic
>>exercise. It takes time to develop and fine-tune. Disabling it is telling the
>>programmer "you wasted your spare time".
>
>
>I think that for me the main problem was writing a program whose book is based
>on pairs of position and move (the public Movei has a book based on games, and
>it did not do a binary search to find the position).
>
>Now that it seems to work, I guess that book learning to avoid losing the same
>line twice is an easier task.
>
>I can have a learning score for every move in the book.
>If the program loses, I plan to reduce the learning score of every book move it
>played by learning_lose[ply].
>If the program wins, I plan to increase the learning score by
>learning_win[ply]. The program always plays a book move with the highest
>learning score.
>
>The idea is that the arrays learning_lose[ply] and learning_win[ply] can be
>edited by the user, but I think that even the simple case of
>learning_lose[ply]=1 and learning_win[ply]=0 can prevent losing the same game
>twice: the starting learning values are 0, and after losing a game with 1.e4,
>1.e4 has a learning value of -1 while the other moves still have 0, so 1.e4
>will not be the program's choice unless it also loses with all the other
>opening moves.
>
>Uri
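
  In code, I imagine the scheme you describe looks roughly like this (just a
sketch in C; the book layout and all the names are my guesses, not Movei's
actual code):

    /* Book of (position, move) pairs, sorted by position key so a binary
       search finds the entries for the current position.  Per-move
       learning scores start at 0.  Layout and names are invented. */

    #include <stdint.h>

    typedef struct {
        uint64_t key;    /* Zobrist hash of the position */
        uint16_t move;   /* encoded move */
        int16_t  learn;  /* learning score, initially 0 */
    } BookEntry;

    static BookEntry *book;      /* sorted by key */
    static int book_size;

    /* index of the first entry for key, or -1 if not in the book */
    static int book_first(uint64_t key)
    {
        int lo = 0, hi = book_size - 1, found = -1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (book[mid].key < key)      lo = mid + 1;
            else if (book[mid].key > key) hi = mid - 1;
            else { found = mid; hi = mid - 1; }   /* keep scanning left */
        }
        return found;
    }

    /* always play a book move with the highest learning score */
    uint16_t book_move(uint64_t key)
    {
        int i = book_first(key);
        if (i < 0) return 0;                      /* out of book */
        int best = i;
        for (; i < book_size && book[i].key == key; i++)
            if (book[i].learn > book[best].learn)
                best = i;
        return book[best].move;
    }

    /* after the game, adjust every book move that was played, by ply */
    extern int learning_win[], learning_lose[];   /* user-editable */

    void book_update(const uint64_t keys[], const uint16_t moves[],
                     int plies, int won)
    {
        for (int ply = 0; ply < plies; ply++)
            for (int i = book_first(keys[ply]);
                 i != -1 && i < book_size && book[i].key == keys[ply]; i++)
                if (book[i].move == moves[ply])
                    book[i].learn += won ? learning_win[ply]
                                         : -learning_lose[ply];
    }
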

  I do something similar, but I also use the result of the search and some other
less important information. I have a parameter that tells the program when to
throw a move out of the book (for example, at -25), and another parameter that
gives an interval of randomness (for example, +-5). Then I generate the moves,
check which of them are in the book, get the learned score and do score[move] =
learned_score + rand(2 * 5) - 5 (or something like that, I don't have the code
here). Then I pick the best and make the move. I also store the learned_score in
the hash table (I remember time and depth), and in case there is no book move to
make, if I have for example 1.e4 with a score of -30 (last searched to ply 13),
the program gets to depth 13 quickly and either keeps searching for a different
move or finds a better move at a lower depth. This way, Averno adds and
eliminates positions from the book to try to maximize results in the long run.
  It's a bit more complex, but that's the idea.
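
  From memory, the selection part looks more or less like this (the names and
numbers are only illustrative; the real Averno code is not at hand):

    #include <stdlib.h>

    #define BOOK_DROP_SCORE  -25   /* at or below this, the move is out */
    #define RANDOM_WINDOW      5   /* +-5 interval of randomness */

    typedef struct {
        int move;           /* engine-specific move encoding */
        int learned_score;  /* accumulated book-learning score */
        int depth, time;    /* remembered from the search that scored it */
    } BookMove;

    /* returns the chosen move, or 0 if no playable book move remains */
    int pick_book_move(const BookMove *bm, int n)
    {
        int best = -1, best_score = 0;

        for (int i = 0; i < n; i++) {
            if (bm[i].learned_score <= BOOK_DROP_SCORE)
                continue;                     /* thrown out of the book */
            int score = bm[i].learned_score
                      + rand() % (2 * RANDOM_WINDOW) - RANDOM_WINDOW;
            if (best < 0 || score > best_score) {
                best = i;
                best_score = score;
            }
        }
        return best < 0 ? 0 : bm[best].move;
    }

  And when nothing playable is left, the remembered scores are seeded into the
hash table before searching, so the engine reproduces the old result quickly
and spends its time on an alternative (hash_store() below is a placeholder for
whatever the engine's own transposition-table store routine is):

    /* Seed remembered scores and depths into the transposition table, so
       a new search quickly reproduces e.g. 1.e4 at -30 from ply 13 and
       then looks for a different or better move. */
    void seed_hash_from_book(const BookMove *bm, int n)
    {
        extern void hash_store(int move, int score, int depth);

        for (int i = 0; i < n; i++)
            hash_store(bm[i].move, bm[i].learned_score, bm[i].depth);
    }
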

  José C.


