Computer Chess Club Archives



Subject: Re: Fractional R for Null Move?

Author: Harald Lüßen

Date: 14:15:08 08/16/04



On August 15, 2004 at 17:33:39, Bruce Cleaver wrote:

>Here's an idea: most programs implement nullmove with R = 2 or R=3 (even
>adaptive nullmove uses R=1, 2, or 3).
>
>Suppose the truly optimal value for R is at 2.2 (not 2.0), the idea being that
>you always reduce the search 2 plies, and then 20% of the time (done
>probabilistically) reduce 3 plies (i.e. if random() <= 0.2, R = 3 else R = 2).
>The same goes for R = 3, or whatever integer value you are using.
>
>I know it goes against the grain having a non-deterministic approach, but an
>extra 20% of the search done at R = 3 vice R = 2 could yield large benefits
>(or terrible blunders, of course).  R = 4 is way too large by experience, but
>maybe R = 3.1 is better than R = 3, and R = 2.5 is better than R = 2.
>
>Just an idea  :)

I have tried this approach in my engine, but I still have no clear result.
I used fractional distance changes in many places after I converted the
engine to fractional plies. I tried it in extensions, in pruning and as a
null-move reduction. My problem is: there are too many possibilities. I
tried to make it dependent on ply, remaining depth, material, game stage,
positional evaluation, threats, king safety ...
I can experiment with the distance fractions or with the margins for the
whole decision. If there are indicators that say the null move (pruning)
will work, should I decrease or increase the margin or the remaining
distance? With every 'improvement' I made, the engine played weaker. :-(

The last battlefield looked like this:
#else // USE_PARTIAL_R_ADAPT()
                int r_distance = distance / 3;
                int r_adapt = r_distance;
                value_type max_value = MAX( position.wmaterial_, position.bmaterial_ );
                max_value = MIN( max_value, PAWN_VALUE * 20 );
                static int r_material_m[] = { -256, -224, -192, -160, -128, -96, -64, -32, 0, 32,
                                              64, 68, 72, 76, 80, 84, 88, 92, 96, 100, 104 };
                value_type r_material = r_material_m[max_value / PAWN_VALUE] * PLY_STEPS() / 64;
                r_adapt += r_material;
                value_type tmp_eval = MAX( -10 * PAWN_VALUE, MIN( eval, 10 * PAWN_VALUE ) );
                static int r_material_a[] = { -60, -54, -48, -42, -36, -30, -24, -18, -12, -6,
                                              0, 6, 12, 18, 24, 30, 36, 42, 48, 54, 60 };
                // Offset by 10 so a negative tmp_eval cannot index below the
                // start of the array (tmp_eval / PAWN_VALUE ranges from -10 to 10).
                r_material = r_material_a[tmp_eval / PAWN_VALUE + 10] * PLY_STEPS() / 64;
                r_adapt += r_material;
                r_adapt += king_safety / 4;
                value_type value;
                r_adapt = MIN( r_adapt, PLY_STEPS() * 13 / 4 );  // cap at 3.25 plies
                if ( r_adapt <= PLY_STEPS() * 7 / 4 )            // below 1.75 plies
                {
                    // Indicate no null move cutoff
                    value = -infinity;
                }
                else
                {
                    value = - search( - beta, - beta + 1, distance - r_adapt - PLY_STEPS() );
                }
#endif

I think I have tried too much too fast, without proper testing and
bookkeeping of the changes and results. And then there is always the
possibility that one big error masks everything else.

That is also the reason why there is no recent new version of the
Elephant. There were improvements, but I have to step back and relax
a bit before I continue programming.

Harald




Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.