Computer Chess Club Archives



Subject: Re: New(?) search idea.

Author: Robert Hyatt

Date: 17:26:40 01/22/98



On January 22, 1998 at 19:37:09, Andrew Walker wrote:

>On January 22, 1998 at 18:47:24, Robert Hyatt wrote:
>
>>On January 22, 1998 at 14:08:34, Bruce Moreland wrote:
>>
>>>
>>>On January 22, 1998 at 14:00:36, Robert Hyatt wrote:
>>>
>>>>The math part is simple.  The PV takes longer to search than all the
>>>>rest of the moves, in general.  If you have 2 PV's, the math is a
>>>>killer.
>>>>Would be real easy to test, just skip the first move at the front of
>>>>each iteration and then come back at the end and pick it up.
>>>
>>>He is searching the successors with a null window, Bob.
>>>
>>
>>doesn't help...  notice that a fail high still takes N times as long as
>>a normal fail low.  And *some* of those root moves may well fail high.
>	My post was meant to extend on the old original alpha/beta
>pruning without any improvements. I'm still not sure I understand
>what failing high and low mean; is failing high when you have proved
>the new move is better but don't have a score? If this is the case then
>the small constant I add to the previous depth score is meant to address
>this.
>If the constant is something like 1.0 then the new move is very likely
>to be better than the old move. However, something as high as 1.0 may
>mean it's very rarely effective! Thus the need for a compromise.
>>
>>1.  what initial window can be used?  Old score from last iteration?
>>Can be off significantly with an extra ply added on.  If the initial
>>window is too low, fail highs will be killingly costly.  If it is too
>>high, everything might fail low and there's no useful info.  If the
>>supposedly best move (last one searched) returns a value lower than
>>expected, what do you do about all the others that failed low on that
>>null window?
> In terms of the original alpha/beta it's all in the original post.
>

yes... but this is not particularly close to correct.  IE it is quite
common to see the score vary ply by ply.  If you underestimate or
overestimate on the null-window search you get killed, particularly if
you underestimate...


>>
>>2.  Suppose all moves fail low, including the supposed best one.
>>Re-search them all?
>	If all the alternate moves get cut off early (is this the same as
>failing low?) then we have to search the best move from the previous
>iteration fully. We haven't gained anything, but shouldn't have lost
>anything either.


right... but suppose the first (best) move, the one you wait until last
to search, also fails low.  Then what?  Redo 'em all?  This is the
problem that I see...  alpha/beta is highly dependent on move ordering
to make it efficient.  But as I mentioned, if you happen to
underestimate the value, the fail high(s) you get will cost.

BTW, "fail high" is what happens when you underestimate the value of
the root moves, say +.32, and you discover that the actual value is
+.7.  You search with some window centered on +.32, but the search
returns beta, so you relax the upper bound to find out how much better
the move really is...

Failing high costs more than failing low (over-estimating the initial
score is better than under-estimating).  But getting it *right* is of
course even better...


>>
>>IE I don't know how to make this efficient at all.  Alpha/beta
>>depends on alpha and beta to limit the size of the tree.  This
>>approach seems to need a hearty dose of witchcraft as well... :)
>>
>Perhaps. Don't most programs? :)
>
>Andrew

yep.  :)




Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.