Computer Chess Club Archives


Subject: Re: negative extensions

Author: Vincent Diepeveen

Date: 20:18:03 01/26/01

On January 26, 2001 at 01:52:48, David Rasmussen wrote:

>On January 25, 2001 at 19:05:12, Robert Hyatt wrote:
>
>
>>Think about it differently.  I.e., at ply=N, I play a move and I am
>>about to do a normal search to depth=X to see how this move works.
>>But first, I assume my opponent does nothing, and I then do a much
>>shallower search with me to move again.  If this is bad for me, there
>>is no need for me to search this move to the full depth, I can get away
>>with searching it to the shallower depth, proven by the null-move observation...
>>
>>
>
>It's still different from what I suggested. As Edward said:
>
>>
>>>
>>>applying david's suggestion to a null move implementation would
>>>mean reducing the search depth after a null move failed high
>>>instead of simply returning immediately with a fail high.
>>>
>>>  - edward
>
>My idea is more general.

Why do you waste all those nodes just to reduce the search by a single
ply? What you are doing simply leads to an incorrect search: depending
upon alpha and beta, you write to the hashtable that a search of
depth n produced score x, while somewhere in the tree it was actually
searched to only n-1 ply because of the reduction.
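
To make concrete what is being proposed and where it breaks, here is a
rough C sketch (every name in it, search, tt_store, R and so on, is an
illustrative stub, not code from any actual engine; in-check and
zugzwang guards are left out):

    /* Illustrative stubs; a real engine supplies these. */
    int  quiesce(int alpha, int beta);
    void make_null(void);
    void unmake_null(void);
    int  generate_moves(int moves[]);
    void make_move(int move);
    void unmake_move(int move);
    void tt_store(int depth, int score);

    #define R 2                  /* nullmove depth reduction factor */

    int search(int alpha, int beta, int depth)
    {
        int i, n, score, reduce = 0, moves[256];

        if (depth <= 0)
            return quiesce(alpha, beta);

        /* nullmove: give the opponent a free move, search shallower */
        make_null();
        score = -search(-beta, -beta + 1, depth - 1 - R);
        unmake_null();

        if (score >= beta)
            reduce = 1;          /* the variant: reduce instead of cutoff */

        n = generate_moves(moves);
        for (i = 0; i < n; i++) {
            make_move(moves[i]);
            score = -search(-beta, -alpha, depth - 1 - reduce);
            unmake_move(moves[i]);
            if (score >= beta) {
                /* recorded as a depth-n fail high, although with
                   reduce == 1 it was really searched a ply shallower */
                tt_store(depth, score);
                return score;
            }
            if (score > alpha)
                alpha = score;
        }
        tt_store(depth, alpha);  /* again recorded at the unreduced depth */
        return alpha;
    }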

The reason this problem probably doesn't show up here as much as it
does in FHR (fail-high reductions) is that if the nullmove succeeds,
the whole rest of what you search in this position is no longer
relevant anyway.

So a huge speedup for your program is to give a cutoff as soon as the
nullmove score is >= beta, instead of searching on like the man with
the short name does.
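
For contrast, the standard nullmove formulation takes the cutoff on
the spot (same illustrative stubs as in the sketch above):

    /* standard nullmove pruning: cut off immediately on a fail high */
    make_null();
    score = -search(-beta, -beta + 1, depth - 1 - R);
    unmake_null();
    if (score >= beta)
        return score;        /* none of the real moves get searched */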

Greetings,
Vincent


