Author: Tord Romstad
Date: 05:36:03 01/30/04
On January 30, 2004 at 04:12:50, Tim Foden wrote:

>When I was in Graz I had a quick look at the books that were on sale. One of
>them had the idea of "Fail High Reductions". It was in one of the books of
>collected papers, possibly of the AGC, but a few years old now.
>
>I scribbled a few quick notes down...
>
>  Use evaluation to deduce threat value t
>
>  In search, if
>
>    eval - t >= beta && alpha + 1 == beta
>
>  then
>
>    reduce depth by 1
>
>... (I said they were quick notes didn't I? :))
>
>It's not something I've yet tried in GLC as GLC doesn't evaluate at each
>internal node, and it doesn't have a very good idea of the value of a threat
>either, but the paper seemed to think it worked quite well.

Thanks, Tim.

I tried fail high reductions about half a year ago. They didn't seem to do
much harm, but also not much good, and in the end I discarded them in the
interest of simplicity. When my engine plays equally well with and without
some trick, I always prefer the simplest version.

Besides, fail high reductions, like nullmove pruning and all other techniques
which are based on the values of alpha and beta, suffer from the problem of
introducing search inconsistencies. You often end up searching the same node
again with a different value of alpha and/or beta, which makes the scores in
your hash table unreliable.

Tord
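For concreteness, the condition from Tim's notes could be sketched in C as below. All names (`fhr_depth`, `eval`, `threat`, `alpha`, `beta`, `depth`) are illustrative, not from any particular engine; `eval` stands for the static evaluation at the node and `threat` for the estimated threat value t:

```c
#include <assert.h>

/* Sketch of a fail-high reduction, per the quoted notes: at a null-window
   node (alpha + 1 == beta), if the static evaluation minus the estimated
   threat value still fails high against beta, search one ply shallower. */
int fhr_depth(int depth, int eval, int threat, int alpha, int beta)
{
    if (eval - threat >= beta && alpha + 1 == beta)
        return depth - 1;   /* reduction applies */
    return depth;           /* otherwise search at full depth */
}
```

Note that the `alpha + 1 == beta` test restricts the reduction to null-window (zero-window) searches, which is also why the technique can introduce the search inconsistencies described below: the same node may later be revisited with a different window and searched to a different depth.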