Computer Chess Club Archives



Subject: Re: New(?) search idea.

Author: Andrew Walker

Date: 16:32:57 01/26/98



On January 22, 1998 at 20:26:40, Robert Hyatt wrote:

>On January 22, 1998 at 19:37:09, Andrew Walker wrote:
>
>>On January 22, 1998 at 18:47:24, Robert Hyatt wrote:
>>
>>>On January 22, 1998 at 14:08:34, Bruce Moreland wrote:
>>>
>>>>
>>>>On January 22, 1998 at 14:00:36, Robert Hyatt wrote:
>>>>
>>>>>The math part is simple.  The PV takes longer to search than all
>>>>>the rest of the moves, in general.  If you have 2 PVs, the math is
>>>>>a killer.  It would be real easy to test: just skip the first move
>>>>>at the front of each iteration and then come back at the end and
>>>>>pick it up.
>>>>
>>>>He is searching the successors with a null window, Bob.
>>>>
>>>
>>>doesn't help...  notice that a fail high still takes N times as long as
>>>a normal fail low.  And *some* of those root moves may well fail high.
>>	My post was meant to extend the original alpha/beta pruning
>>without any improvements.  I'm still not sure I understand what
>>failing high and low mean.  Is failing high when you have proved the
>>new move is better but don't have a score?  If so, the small constant
>>I add to the previous depth's score is meant to address this.
>>If the constant is something like 1.0 then a new move that beats it
>>is very likely to be better than the old move.  However, something as
>>high as 1.0 may mean it's very rarely effective!  Thus the need for a
>>compromise.
>>>
>>>1.  What initial window can be used?  The old score from the last
>>>iteration?  That can be off significantly with an extra ply added
>>>on.  If the initial window is too low, fail highs will be killingly
>>>costly.  If it is too high, everything might fail low and there's no
>>>useful info.  If the supposedly best move (the last one searched)
>>>returns a value lower than expected, what do you do about all the
>>>others that failed low on that null window?
>> In terms of the original alpha/beta it's all in the original post.
>>
>
>yes... but this is not particularly close to correct.  I.e. it is
>quite common to see the score vary ply by ply.  If you underestimate
>or overestimate on the null-window search you get killed, particularly
>if you underestimate...
>
>
>>>
>>>2.  Suppose all moves fail low, including the supposed best one.
>>>Re-search them all?
>>	If all the alternate moves get cut off early (is this the same as
>>failing low?) then we have to search the best move from the previous
>>iteration fully.  We haven't gained anything, but shouldn't have lost
>>anything either.
>
>
>right... but suppose the first (best) move that you wait until last to
>search also fails low.  Then what?  Redo 'em all?  This is the problem
>that I see... alpha/beta is highly dependent on move ordering to make
>it efficient.  But as I mentioned, if you happen to underestimate the
>value, the fail high(s) you get will cost.
>
>BTW, "fail high" is what happens when you underestimate the value of
>the root moves, say +.32, and you discover that the actual value is
>+.7.  You search with some window centered on +.32, but the search
>returns beta, so you relax the upper bound to find out how much better
>the move really is...
>
>Failing high costs more than failing low (over-estimating the initial
>score is better than under-estimating).  But getting it *right* is of
>course even better...
>
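	In code, the fail-high/fail-low re-search Bob describes looks
something like this (a minimal sketch; `search(depth, alpha, beta)` is
an assumed fail-soft searcher, and the names and window size are just
illustrative):

```python
# Minimal aspiration-window loop: start with a window centered on the
# previous iteration's score and relax whichever bound the search
# fails against, re-searching until the score falls inside the window.
INF = 10**9

def aspiration(search, depth, prev_score, window=25):
    alpha, beta = prev_score - window, prev_score + window
    while True:
        score = search(depth, alpha, beta)
        if score <= alpha:       # fail low: relax the lower bound
            alpha = -INF
        elif score >= beta:      # fail high: relax the upper bound
            beta = INF
        else:
            return score         # exact score inside the window
```

The fail-high case (beta relaxed, then a re-search) is exactly where
the extra cost comes from.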
	Just to recap, and to save referring to my earlier post, I'll
repeat the method.  At a given depth we take the score from the
previous depth and add a small positive constant.  We use this as a
cutoff value and
	1) Search all the alternate moves using this cutoff.
	2) Search the primary move.  If all the moves in 1) were cut off we
	   need to search it fully; otherwise a cutoff value given by the
	   best score from 1) is used.
	3) Re-search the alternate moves if necessary.  This only occurs if
	   all the moves from 1) were cut off **AND** the new value of the
	   primary move from part 2) is less than the cutoff value used
	   in 1).
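
	A rough sketch of the three steps (all names are illustrative:
`search(move, depth, alpha, beta)` is an assumed fail-soft alpha/beta
searcher, and MARGIN is the small positive constant):

```python
# Sketch of the three-step root scheme described above.
INF = 10**9
MARGIN = 30  # centipawns; hypothetical value, would need tuning

def root_pass(moves, prev_score, depth, search):
    """moves[0] is the primary move (best from the previous depth)."""
    cutoff = prev_score + MARGIN

    # 1) null-window search of every alternate against the cutoff
    alt_best, alt_move = -INF, None
    for m in moves[1:]:
        s = search(m, depth, cutoff - 1, cutoff)
        if s > alt_best:
            alt_best, alt_move = s, m

    if alt_best < cutoff:
        # 2a) every alternate was cut off: search the primary fully
        best = search(moves[0], depth, -INF, INF)
        if best >= cutoff:
            return moves[0], best        # primary confirmed best
        # 3) primary also fell below the cutoff: the null-window
        #    results prove nothing at this lower bar, so the
        #    alternates must be re-searched against the primary's score
        best_move = moves[0]
        for m in moves[1:]:
            s = search(m, depth, best, INF)
            if s > best:
                best_move, best = m, s
        return best_move, best

    # 2b) some alternate beat the cutoff: its (lower-bound) score is
    #     now the only value the primary has to beat
    s = search(moves[0], depth, alt_best, alt_best + 1)
    if s > alt_best:
        return moves[0], s
    return alt_move, alt_best
```

Step 3 is the re-search cost people have objected to, and step 2b is
where the saving appears when the constant is well chosen.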

The problem people have pointed out is the need to re-search moves in
part 3) and the extra time that uses.  Suppose white is to move.  When
I first thought of this method I thought that the results from part 1)
would be useful if the alternate moves need re-searching, i.e. we have
a list of white moves and black responses with scores.  For some white
moves no more searching will be necessary, e.g. if a black response
forces a mate.  For others more searching may be necessary, as we will
be using a lower cutoff value.  If we can do this, we don't lose time,
as the number of black responses searched in total should be
equivalent to the normal case of searching the primary move first.
The question is, can this be done when windows are being used?  I
don't know.  We will at least have some information stored in the hash
table.  Even so, since all the moves were cut off early in 1) that
pass should have been quick, and the re-searching should hopefully be
quick as well, as we have the move ordering.
	As you have said, the score values vary from ply to ply; for Crafty
this would normally be less than, say, 0.2 pawns?  The idea of the
positive constant is to try to overcome this.  If the constant had a
value of two pawns, then whenever one of the alternate moves is not
cut off in 1), it is extremely likely that the primary move will be
cut off in 2), resulting in a saving of time.  However, in a game such
large improvements probably don't occur too often (except against
humans and weak computers :) ), so this time saving may be lost to any
time used re-searching alternate moves.
	A constant of 0.3 to 0.5 pawns is possibly more realistic.  Even if
the time taken overall is greater, this method may result in one or
more significantly better moves being played; is that sufficient?  All
of this would of course have to be tested.
	One benefit that is definitely real would be in problem solving.
In problems a large jump in value usually occurs when a forced win or
win of material is found, so even if more time is used at lower plies,
time will be saved at the final ply, which should result in a quicker
solution time!
	Anyway, it would be interesting to hear more on the issue of the
time used in re-searching the alternates.
	Andrew Walker




Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.