Computer Chess Club Archives


Subject: Re: sliding attacks in three #define

Author: Omid David Tabibi

Date: 01:44:20 04/14/04


On April 13, 2004 at 22:02:47, Christophe Theron wrote:

>On April 13, 2004 at 16:10:12, Omid David Tabibi wrote:
>
>>On April 13, 2004 at 14:11:04, Christophe Theron wrote:
>>
>>>On April 13, 2004 at 09:26:54, Vincent Diepeveen wrote:
>>>
>>>>On April 12, 2004 at 14:45:28, Christophe Theron wrote:
>>>>
>>>>>On April 12, 2004 at 07:50:47, Tord Romstad wrote:
>>>>>
>>>>>>On April 12, 2004 at 00:09:48, Christophe Theron wrote:
>>>>>>
>>>>>>>On April 11, 2004 at 13:52:59, Tom Likens wrote:
>>>>>>>
>>>>>>>>On April 10, 2004 at 21:53:17, Christophe Theron wrote:
>>>>>>>>
>>>>>>>>>On April 10, 2004 at 15:55:17, Tom Likens wrote:
>>>>>>>>>>
>>>>>>>>>>I'm not sure where I come down on the bitboards vs. non-bitboard
>>>>>>>>>>architectures.  My engine is a bitboard engine, but that doesn't
>>>>>>>>>>necessarily mean that the next one will be bitboard based.
>>>>>>>>>>
>>>>>>>>>>I don't believe, though, that the fact that no bitboarder has topped the
>>>>>>>>>>SSDF list really constitutes any kind of proof.  My strong
>>>>>>>>>>suspicion is that if all the top commercial programmers converted
>>>>>>>>>>over to bitboards tomorrow (yourself included) that *eventually*
>>>>>>>>>>their new engines would again rise to the top of the SSDF.  I'm
>>>>>>>>>>beginning to suspect that creating a strong (i.e. world-class) engine
>>>>>>>>>>involves a helluva lot more than just the basic data representation,
>>>>>>>>>>but instead involves...
>>>>>>>>>>
>>>>>>>>>>1. 24/7 dedication
>>>>>>>>>>2. A *real* way to measure progress
>>>>>>>>>>3. A selective search strategy that works 99.99999% of the time
>>>>>>>>>>4. Attention to about 2^64 minor details
>>>>>>>>>>5. A failed marriage (okay, maybe this is extreme but you see the point)
>>>>>>>>>>
>>>>>>>>>>regards,
>>>>>>>>>>--tom
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>Number 5 (or something close) was the reason why Tiger made such progress
>>>>>>>>>between 1997 and 1999. :)
>>>>>>>>>
>>>>>>>>>Number 2, seriously, is worth spending several months on.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>    Christophe
>>>>>>>>
>>>>>>>>This has been my main focus over the past few weeks.  It's become readily
>>>>>>>>apparent to me that the improvement slope from here on up is much steeper
>>>>>>>>and I'd rather not waste my time implementing features that I can't properly
>>>>>>>>test.
>>>>>>>>
>>>>>>>>regards,
>>>>>>>>--tom
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>That's the secret of real professional chess programmers.
>>>>>>
>>>>>>Of course you don't want to reveal your secrets, but it would be interesting
>>>>>>if you could answer the following question:
>>>>>>
>>>>>>Assume that you make a change to your engine which improves the playing
>>>>>>strength by about 10 Elo points.  How many hours of CPU time do you need
>>>>>>before you are sure that the change was an improvement?
>>>>>>
>>>>>>Tord
>>>>>
>>>>>
>>>>>
>>>>>I would say approximately one week, and I would not even be really sure it is an
>>>>>improvement. We are talking about a 1.5% improvement in winning percentage here,
>>>>
>>>>That's very quick. I remember one DIEP test that took Jan Louwman 3 months to
>>>>run: 1000 games in total, about 500 for version A and about 500 for version B.
>>>>Level: 40 moves in 2 hours.
>>>>
>>>>>it's below the statistical noise of a match of several hundred games if you want
>>>>>95% reliability!
>>>>
>>>>>And unfortunately a 10 Elo point improvement is becoming rare for me. Most of
>>>>>the changes I try make the program weaker, and many changes do not provide any
>>>>>measurable improvement!
>>>>
>>>>Another sneaky problem is that if you try to modify some behaviours of your
>>>>program, it will at first score worse until everything is retuned.
>>>>
>>>>>That's why not having a strong test methodology is totally out of the question if
>>>>>you are serious about chess programming.
>>>>
>>>>I agree here with you.
>>>>
>>>>At the same time it's worth mentioning that a good test methodology has its
>>>>limitations too. If you just believe the tests without taking other factors into
>>>>account, then that is a flawed approach too.
>>>>
>>>>Note that DIEP could use some testing; nothing is done systematically at the
>>>>moment, and the number of test games per day can be counted on one hand, if
>>>>there is anything to report at all ;)
>>>>
>>>>>Even with a good test methodology chess programming is still an art: in many
>>>>>cases you have to decide with your feelings, because the raw data does not give
>>>>>you a definite answer.
>>>>>
>>>>>Now of course there are small improvements that I do not even need to test for a
>>>>>long time: if I find a way to make my program 10% faster without changing the
>>>>>shape of the tree, then all I need to do is run some safety tests that will only
>>>>>look at the number of nodes searched on a large set of positions and compare it
>>>>>to the last stable version.
>>>>
>>>>Let's discuss that "without changing the shape of the tree". I suspect you mean
>>>>by that forward pruning.
>>>
>>>
>>>
>>>As Uri has pointed out, not changing the shape of the tree means searching
>>>exactly the same tree.
>>>
>>>Any kind of pruning, different extension scheme or even changing the evaluation
>>>changes the shape of the tree.
>>>
>>>I was talking about searching the same tree, but faster because of some speed
>>>optimization.
>>>
>>>
>>>
>>>
>>>>So far tests have convincingly shown (take the 1000 games of Louwman) that not
>>>>a single forward pruning experiment tried so far for DIEP has resulted in better
>>>>play.
>>>>
>>>>There is one week to go now until ICT4, and I see that I can possibly win 1500
>>>>dollars. The problem is that this time there is no good hardware for DIEP, so
>>>>I'll have to find some spectacular improvements in DIEP in order to win that
>>>>prize :)
>>>>
>>>>Now I have decided, regarding forward pruning in the entire tree (I do not see
>>>>null move as a problem, by the way), that so far all those experiments have
>>>>failed for DIEP.
>>>>
>>>>At a remaining depth of 3 to 4 plies in DIEP I do the following type of null move:
>>>>
>>>>  score = -qsearch( .. );
>>>>
>>>>  if( score >= beta ) return score;
>>>>
>>>>Of course, as already explained, it's not implemented like this because my search
>>>>is non-recursive, but it describes well the crudeness with which I do things.
>>>
>>>
>>>
>>>Do I understand your code correctly? You are not doing any null move before
>>>calling qsearch, right?
>>
>>No, he is doing normal null-move pruning. Take a look at the standard null-move
>>pruning pseudo-code:
>>
>>
>>int search (alpha, beta, depth) {
>>    if (depth <= 0)
>>        return evaluate(); /* in practice, quiescence() is called here */
>>    /* conduct a null-move search if it is legal and desired */
>>    if (!in_check() && null_ok()) {
>>        make_null_move();
>>        /* null-move search with minimal window around beta */
>>        value = -search(-beta, -beta + 1, depth - R - 1);
>>        unmake_null_move(); /* take the null move back */
>>        if (value >= beta) /* cutoff in case of fail-high */
>>            return value;
>>    }
>>    /* continue regular NegaScout/PVS search */
>>    . . .
>>}
>>
>>
>>
>>The line that calls the null-move search is:
>>
>>    value = -search(-beta, -beta + 1, depth - R - 1);
>>
>>which means that the last parameter (depth) will be 0 or less if:
>>
>>    depth <= R + 1
>>
>>Vincent is using R = 3, so his code is:
>>
>>    value = -search(-beta, -beta + 1, depth - 4);
>>
>>which means that whenever depth <= 4, he is calling the null-move search with
>>depth <= 0, which is in fact a direct call to quiescence.
>
>
>
>Hence my comment below.
>

Well, to the best of my knowledge what Vincent is doing is the conventional form
of null-move pruning. Most engines that apply null-move pruning do exactly the
same thing, except possibly that they use R = 2 instead of R = 3.
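
To spell that out: with R = 3 the reduced depth, depth - R - 1, is zero or less
whenever depth <= 4, so near the leaves the null-move search is nothing more than a
direct call to quiescence. Below is a minimal C sketch of that equivalence (my own
illustration, not Vincent's or anyone else's actual code; quiescence(), in_check(),
null_ok(), make_null_move() and unmake_null_move() are placeholder names for the
usual engine primitives):

/* Assumed engine primitives (placeholder declarations only). */
int  quiescence(int alpha, int beta);
int  in_check(void);
int  null_ok(void);
void make_null_move(void);
void unmake_null_move(void);

#define R 3   /* null-move depth reduction; R = 2 is also common */

int search(int alpha, int beta, int depth)
{
    if (depth <= 0)
        return quiescence(alpha, beta);

    if (!in_check() && null_ok()) {
        int value;
        make_null_move();
        if (depth <= R + 1)
            /* The reduced depth would be <= 0, so the null-move search
               collapses into a direct quiescence call -- Vincent's
               "score = -qsearch(..)" shortcut near the leaves. */
            value = -quiescence(-beta, -beta + 1);
        else
            value = -search(-beta, -beta + 1, depth - R - 1);
        unmake_null_move();
        if (value >= beta)
            return value;   /* fail-high cutoff */
    }

    /* ... the regular NegaScout/PVS move loop continues here ... */
    return alpha;   /* placeholder; the real move loop returns the best score */
}

The explicit depth <= R + 1 branch is only there to make the equivalence visible; it
behaves exactly the same as always calling search() with the reduced depth.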



>
>
>    Christophe
>
>
>
>
>
>>>You call qsearch (a capture-only search, maybe throwing in some checking moves)
>>>instead of doing a normal (full-width) search.
>>>
>>>I don't remember having ever tried to do that.
>>>
>>>At first sight, I don't think this will be very efficient. If a capture move is
>>>to fail high, it is going to fail high very quickly anyway because most chess
>>>programs try captures first. So I don't think you will get a huge speedup from
>>>this.
>>>
>>>On the other hand, you are clearly going to miss many 4-ply tactical shots at
>>>the end of the tree.
>>>
>>>Have you tested this system thoroughly? Does it really give an improvement over
>>>a non-pruning version?
>>>
>>>Or maybe I don't understand and you do a null-move before calling qsearch, and
>>>in that case it's not really different from having a null-move search with R=3.
>>>
>>>
>>>
>>>
>>>>Measuring my tree I see that the vast majority of nodes searched by DIEP are
>>>>qsearch nodes. About 80% of all nodes are in qsearch.
>>>>
>>>>I'm interested now in redoing a forward pruning experiment.
>>>>
>>>>The current thought is to forward prune at the last ply. So I try to select a
>>>>few moves there and play them.
>>>>
>>>>Now of course the amount of pruning is dependent upon how many moves you try
>>>>there.
>>>>
>>>>Yet I'm wondering how much search depth I can potentially gain just by weeding
>>>>out moves at that last ply.
>>>>
>>>>What is your estimate?
>>>
>>>
>>>
>>>I guess you can get a 10 to 20% speedup of your search (searching 10 to 20% fewer
>>>nodes overall).
>>>
>>>You will not get an additional ply of search from this. Maybe 0.2 or 0.3 plies.
>>>
>>>
>>>
>>>
>>>    Christophe
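
As a rough back-of-the-envelope check of that last estimate (my own sketch, assuming
an effective branching factor somewhere between 2 and 3 for a null-move engine, not a
measured figure for DIEP): if last-ply pruning means only a fraction f of the nodes
remain, then the same node budget reaches roughly ln(1/f) / ln(b) extra plies.

#include <math.h>
#include <stdio.h>

int main(void)
{
    double f = 0.80;   /* fraction of nodes still searched, i.e. a 20% saving (assumed) */
    double b;

    /* Sweep a few plausible effective branching factors. */
    for (b = 2.0; b <= 3.0; b += 0.5)
        printf("b = %.1f: extra depth of about %.2f plies\n", b, log(1.0 / f) / log(b));

    return 0;
}

With f = 0.8 this comes out at roughly 0.2 to 0.3 plies, which is in line with
Christophe's guess above.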


