Computer Chess Club Archives


Subject: Re: Repeatability (questions for Omid)

Author: Omid David Tabibi

Date: 16:11:59 12/18/02

On December 18, 2002 at 18:43:26, Bruce Moreland wrote:

>On December 18, 2002 at 13:13:09, Robert Hyatt wrote:
>
>>Actually I found it "non-conclusive" as I reported.  It helped in some places,
>>hurt in others, and the end-of-term stuff here (and then the SMT stuff on this
>>new machine) side-tracked me for a while...  I still have plans to play with it
>>further.
>
>So you took an initial stab at repeating this and failed, if I can read between
>those lines.
>
>I implemented this in Gerbil last night and ran it.
>
>I found that this was inferior to both R=2 and R=3 at every one-second interval,
>with ECM, between 1 and 20 seconds.
>
>Meaning that it never produces more solutions in a given number of seconds.
>
>General R=3 also is never better than R=2, given this testing methodology, in
>Gerbil.
>
>It is possible that I implemented it wrongly.  There are a couple of things that
>I don't understand and had to guess about:
>
>1) When the null-move search comes back with fail high, and verify is true, I
>will do a regular search with reduced depth.  If this search fails high, I cut
>off like normal.  If this search does not fail high, I have to re-search with
>the original depth.  What I don't understand is what I do if the initial reduced
>depth search modified alpha.  I am assuming that I put alpha back the way it was
>and start over.
>

Some programs have search instabilities which result in excessive re-searches with
this algorithm. As a first stage, don't do any re-searches at all; once everything
else works correctly, implement the re-search.

The tactical strength is not affected by the re-searches; they merely help avoid
errors in zugzwang positions.

Reducing by two plies instead of one after a fail-high report might work better
for some programs. Test both.

Each program has a certain depth beyond which vrfd R=3 *always* constructs a
smaller tree. For some programs this threshold might be higher, so conduct your
experiments at greater depths.
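To make the above concrete, here is a rough C sketch of how the verification and
the re-search can fit together. It is only an illustration of the scheme as
described, not code from any particular program: quiesce(), in_check(),
make_null_move(), unmake_null_move(), generate_moves(), make_move() and
unmake_move() are hypothetical placeholders for the host engine's own routines,
and mate/stalemate and hash-table handling are omitted.

    #include <limits.h>

    #define R 3                      /* null-move depth reduction */

    /* Hypothetical engine hooks -- placeholders, not real code: */
    extern int  quiesce(int alpha, int beta);
    extern int  in_check(void);
    extern void make_null_move(void);
    extern void unmake_null_move(void);
    extern int  generate_moves(int *moves);   /* fills list, returns count */
    extern void make_move(int move);
    extern void unmake_move(int move);

    int search(int alpha, int beta, int depth, int verify, int last_was_null)
    {
        int moves[256];
        int i, n, value, best;
        int fail_high = 0;
        const int orig_alpha = alpha, orig_depth = depth;

        if (depth <= 0)
            return quiesce(alpha, beta);

        /* Try a null move unless in check or the previous move was
           itself a null move (no two null moves in a row). */
        if (!last_was_null && !in_check() && depth > 1) {
            make_null_move();
            value = -search(-beta, -beta + 1, depth - R - 1, verify, 1);
            unmake_null_move();
            if (value >= beta) {
                if (!verify)
                    return value;    /* ordinary null-move cutoff */
                /* Verification: continue with reduced depth and with
                   verification switched off for this subtree. */
                depth--;
                verify = 0;
                fail_high = 1;
            }
        }

    re_search:
        best = -INT_MAX;             /* mate/stalemate handling omitted */
        n = generate_moves(moves);
        for (i = 0; i < n; i++) {
            make_move(moves[i]);
            value = -search(-beta, -alpha, depth - 1, verify, 0);
            unmake_move(moves[i]);
            if (value > best) {
                best = value;
                if (value > alpha)
                    alpha = value;
            }
            if (value >= beta)
                break;               /* fail high: cutoff */
        }

        /* The reduced-depth verification search did not fail high after
           all: put alpha back the way it was, restore the original
           depth, and search the node again. */
        if (fail_high && best < beta) {
            alpha = orig_alpha;
            depth = orig_depth;
            verify = 1;
            fail_high = 0;
            goto re_search;
        }
        return best;
    }

At the first stage you can simply leave out the final fail_high block and return
best directly; add the re-search once the rest works.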


>2) I don't know if this implementation allows two consecutive null moves or
>what.  Specifically, I don't know what "null_ok()" does.  I am assuming that if I
>don't allow two null moves in a row already, I can continue to not allow two
>null moves in a row.
>

I didn't allow two null moves in a row.
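If it helps, here is one plausible reading of null_ok() -- a sketch under that
assumption, not a quote of any actual implementation; last_move_was_null() is a
hypothetical helper:

    extern int in_check(void);
    extern int last_move_was_null(void);   /* hypothetical helper */

    /* Allow a null move only when the side to move is not in check and
       the previous move was not itself a null move. */
    int null_ok(void)
    {
        return !in_check() && !last_move_was_null();
    }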


>3) I am assuming that if the search cuts off when doing the reduced-depth
>searches, the depth recorded in the hash table should be the original depth,
>and not the reduced depth.
>

I used the reduced depth, as recording the original depth would be misleading.
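In code terms, with a hypothetical hash_store(key, depth, value, flag) routine
(position_key and HASH_LOWER_BOUND are placeholder names too), a cutoff inside
the reduced-depth verification search would be stored like this:

    /* Sketch only: record the depth that was actually searched (the
       already-reduced depth), not the original pre-reduction depth. */
    if (value >= beta) {
        hash_store(position_key, depth, value, HASH_LOWER_BOUND);
        return value;
    }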


>I can't find any bugs in my implementation, if my assumptions were correct.
>

You can find detailed discussions of all these points in the archives.

Omid.



>bruce


