Computer Chess Club Archives



Subject: Re: Repeatability (questions for Omid)

Author: Robert Hyatt

Date: 15:02:57 12/19/02


On December 19, 2002 at 16:30:25, Bruce Moreland wrote:

>On December 19, 2002 at 11:24:44, Robert Hyatt wrote:
>
>>>1) When the null-move search comes back with fail high, and verify is true, I
>>>will do a regular search with reduced depth.  If this search fails high, I cut
>>>off like normal.  If this search does not fail high, I have to re-search with
>>>the original depth.  What I don't understand is what I do if the initial reduced
>>>depth search modified alpha.  I am assuming that I put alpha back the way it was
>>>and start over.
>>
>>That is how I did it, yes...
>>
>>
>>>
>>>2) I don't know if this implementation allows two consecutive null moves or
>>>not.  Specifically, I don't know what "null_ok()" does.  I am assuming that
>>>if I already disallow two null moves in a row, I can continue to disallow
>>>them.
>>
>>
>>I did it just like my normal program.  No consecutive null-move searches, and
>>I did continue to use the "avoid-null" hash table trick as well.
>>
>>
>>
>>>
>>>3) I am assuming that if the search cuts off during the reduced-depth
>>>searches, the depth recorded in the hash table should be the original depth,
>>>and not the reduced depth.
>>
>>That is what I did, although it might be a problematic assumption.  I
>>didn't give it much thought, since the code already worked that way
>>normally...
>
>I think you are thinking about a different area.  In Omid's pseudo-code, he
>takes the main "depth" variable and decrements it.
>
>If he's off doing a search, and fails high, he has to record hash for this node
>and cut off.
>
>The depth value has been decremented, so unless he puts it back, he's going to
>store a "9" in the hash table even though he entered this node with a "10".
>
>In my implementation I never decrement depth, so I don't have this problem.
>What I do instead is remember that I'm in this state, so rather than passing
>"depth - 1" to the recursed search function, I pass "depth - 2".
>

That is how I do it as well.  But it seems "wrong" in a way.  You are not
_really_ doing a depth-1 search, you are doing a depth-2 search, but you are
storing the result without regard to the fact that it came back from a
shallower depth than a normal search from this position would use.  Which is
what I do, and it seems like it could be problematic...
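
For the record, here is roughly what the whole scheme looks like as code:
verified null-move pruning with the "pass depth - 2 instead of decrementing
depth" trick, alpha restored before the full-depth re-search, and the hash
store done with the original depth.  This is only a sketch in C; every helper
in it (null_ok(), make_null(), next_move(), hash_store() and so on) is a
hypothetical stand-in, not Crafty's or Gerbil's actual interface:

#define R    3                  /* null-move depth reduction */
#define INF  32000

enum { BOUND_UPPER, BOUND_LOWER, BOUND_EXACT };

typedef int MOVE;

extern int  quiesce(int alpha, int beta);
extern int  in_check(void);
extern int  null_ok(void);      /* rejects two nulls in a row, and can also
                                   honor an "avoid null" flag from the hash
                                   table when a stored bound proves the null
                                   search would fail low */
extern void make_null(void), unmake_null(void);
extern void reset_moves(void);
extern int  next_move(MOVE *m); /* returns 0 when the move list is exhausted */
extern void make_move(MOVE m), unmake_move(MOVE m);
extern void hash_store(int depth, int score, int bound);

int search(int alpha, int beta, int depth, int verify)
{
    int score, best, reduce = 0;
    int orig_alpha = alpha;
    MOVE m;

    if (depth <= 0)
        return quiesce(alpha, beta);

    if (null_ok() && !in_check()) {
        make_null();
        score = -search(-beta, -beta + 1, depth - 1 - R, verify);
        unmake_null();
        if (score >= beta) {
            if (!verify)
                return score;   /* ordinary null-move cutoff */
            reduce = 1;         /* verify: search this node one ply shallower, */
            verify = 0;         /* with verification switched off below it     */
        }
    }

research:
    best = -INF;
    reset_moves();
    while (next_move(&m)) {
        make_move(m);
        /* "depth" itself is never decremented; the verification pass just
           hands depth - 2 to the children, so the hash_store() below still
           sees the depth this node was entered with. */
        score = -search(-beta, -alpha, depth - 1 - reduce, verify);
        unmake_move(m);
        if (score > best) {
            best = score;
            if (best > alpha)
                alpha = best;
            if (alpha >= beta)
                break;
        }
    }

    if (reduce && best < beta) {
        /* Verification failed: put alpha back the way it was and redo the
           whole thing at the original depth, with verification back on. */
        alpha = orig_alpha;
        reduce = 0;
        verify = 1;
        goto research;
    }

    /* Stored with the original depth even when the cutoff came from the
       reduced verification search -- the part that "seems wrong" above. */
    hash_store(depth, best, best >= beta      ? BOUND_LOWER :
                            best > orig_alpha ? BOUND_EXACT : BOUND_UPPER);
    return best;
}

The avoid-null part of null_ok() is just the hash trick mentioned above: if
the table already holds a bound proving the null-move search cannot reach
beta, skip the null move entirely.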


>Omid suggested that I remove the re-search idea since it is a zugzwang
>optimization.  I tried that and performance was almost identical to R=3, which
>in Gerbil's case is not better than R=2.
>
>I'm going to continue tweaking with this, but for now I have some related ideas
>testing in Ferret.
>
>bruce


I don't like the re-search anyway.  Run it on fine 70 and you will get to
40+ plies instantly without realizing that Kb2 only draws.  As I reported
right after the paper came out, it clearly misses many zugzwang problems...
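
For anyone wanting to reproduce that: fine 70 is the Lasker-Reichhelm
position from Fine's "Basic Chess Endings", where 1. Kb1! is the only
winning move and 1. Kb2 (like every other first move) only draws.  A trivial
check, with set_position() and best_move() as hypothetical harness calls:

#include <stdio.h>
#include <string.h>

extern void        set_position(const char *fen);  /* hypothetical */
extern const char *best_move(int max_depth);       /* hypothetical */

int main(void)
{
    /* Fine #70 (Lasker-Reichhelm): only 1. Kb1 wins. */
    set_position("8/k7/3p4/p2P1p2/P2P1P2/8/8/K7 w - - 0 1");
    const char *mv = best_move(30);
    printf("fine 70: %s (%s)\n", mv,
           strcmp(mv, "Kb1") == 0 ? "solved" : "missed zugzwang");
    return 0;
}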



