Author: Ulrich Tuerke
Date: 03:54:24 12/09/03
On December 08, 2003 at 16:19:48, Tim Foden wrote:

>On December 08, 2003 at 11:02:22, Alvaro Jose Povoa Cardoso wrote:
>
>>I've been thinking about the efficiency of the history heuristic at high
>>search depths. It seems to me that the history table will be overwritten many
>>times if we have a search of several billion nodes. Additionally, as the
>>search moves to different parts of the tree, the history table values will be
>>somewhat trashed. What do you think we could do about this? Maybe limit the
>>history heuristic to a certain depth (e.g. the nominal depth).
>>
>>Comments anyone?
>
>What happens in GLC is that whenever it increments a value in the history
>table, it checks it against a maximum. If the maximum value is exceeded, it
>divides all values in the table by 2.

Isn't this a bit expensive? I guess that this may happen several times per
iteration, and perhaps very often at higher iterations.

I do it like this:

    if (*hisptr < maximum)
        *hisptr += ((depth * 32) >> iteration_no);

"depth" is the distance of the current ply to the horizon, so this measures in
some way the size of the sub-tree which has been cut off. I.e. the increment is
smaller at higher iterations, because at higher iterations I expect the history
table to fill much faster. At the start of a new iteration, I divide all
entries by 2.

However, I'm not sure how well this works.

Uli

>
>Cheers, Tim.
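To make the two schemes above concrete, here is a minimal C sketch of both: the GLC-style update, which halves the whole table whenever an entry exceeds a cap, and Uli's update, which scales the increment down by the iteration number and ages the table once per iteration instead. The table size, cap, and the `depth*depth` increment in the GLC variant are illustrative assumptions; the post only specifies Uli's `(depth*32) >> iteration_no` formula and the divide-by-2 aging.

```c
#include <assert.h>

#define HIST_SIZE 4096       /* e.g. indexed by 64x64 from/to squares (assumed) */
#define HIST_MAX  (1 << 14)  /* cap before aging kicks in (assumed)            */

static int history[HIST_SIZE];

/* GLC-style update (per the post): bump the entry, and if it exceeds the
   maximum, divide every entry in the table by 2. The depth*depth increment
   is a common choice, not stated in the post. */
void hist_bump_glc(int move, int depth)
{
    history[move] += depth * depth;
    if (history[move] > HIST_MAX) {
        for (int i = 0; i < HIST_SIZE; i++)
            history[i] /= 2;
    }
}

/* Uli's update: increment only while below the maximum, with the increment
   shrinking as the iteration number grows. */
void hist_bump_uli(int move, int depth, int iteration_no)
{
    if (history[move] < HIST_MAX)
        history[move] += (depth * 32) >> iteration_no;
}

/* Uli's aging: halve all entries once at the start of each new iteration,
   so the expensive full-table pass runs a bounded number of times. */
void hist_age_new_iteration(void)
{
    for (int i = 0; i < HIST_SIZE; i++)
        history[i] /= 2;
}
```

The trade-off Uli raises is visible here: the GLC variant can trigger the O(table) halving loop many times inside one iteration once entries approach the cap, while the per-iteration aging pass runs exactly once per iteration at a predictable point.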