Computer Chess Club Archives




Subject: Re: Couple of chess programming questions

Author: Robert Hyatt

Date: 10:40:54 09/10/02


On September 10, 2002 at 09:26:14, Eli Liang wrote:

>A couple of chess programming questions:
>(1) Are there any uses for ProbCut and/or Multi-ProbCut in chess positions where
>the variance of leaf-nodes is low?  What about the use of Warren Smith's BPIP or
>Russell&Wefald's meta-greedy algorithms in chess?  Any wins?
>(2) Is it practical to use partial-order bounding as an improvement over
>conventional static evaluation?
>(3) Reading Aske Plaat's search & re-search paper, it really seems like mtd(f)
>is something of a magic bullet.  But I note it seems that more programs don't
>use it than do (for example Crafty).  What is wrong with mtd(f) which Plaat
>doesn't say?

Nothing.  It is more an issue of how a particular program behaves.  mtd(f) does
several searches using a null-window to bracket the true score.  If you can do a
couple of these and be done, you win big.  If it takes several searches to home
in on the final score, then you might not win at all.

It is all about how many re-searches you have to do to get the true score,
and that is a function of how your evaluation varies from iteration to
iteration.  For example, if you see your score vary by .3 from iteration to
iteration, then the score from iteration X will be considerably off from the
score at iteration X+1.  And it will take several re-searches to "find" the
new best score, which then changes again the next iteration.

That's a killer.  If your eval doesn't fluctuate that significantly, then
mtd(f) should be able to "win" for you...
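
To make the re-search mechanics concrete, here is a minimal sketch of an
mtd(f) driver in C (function names are hypothetical, and the null-window
search is stubbed with a fixed "true score" purely so the convergence can be
seen in isolation):

```c
#include <limits.h>

/* Stand-in for a real null-window alpha-beta search over the
   window (beta-1, beta).  Here it simply reports the position's
   true score (37, an arbitrary choice) instead of searching. */
static int null_window_search(int beta, int depth)
{
    (void)beta; (void)depth;
    return 37;
}

/* mtd(f): converge on the true minimax score through repeated
   null-window probes, starting from a first guess f (typically
   the previous iteration's score -- which is why a stable eval
   means fewer re-searches). */
int mtdf(int f, int depth)
{
    int g = f;
    int lower = INT_MIN, upper = INT_MAX;

    while (lower < upper) {
        int beta = (g == lower) ? g + 1 : g;
        g = null_window_search(beta, depth);
        if (g < beta)
            upper = g;      /* probe failed low:  score < beta   */
        else
            lower = g;      /* probe failed high: score >= beta  */
    }
    return g;               /* lower == upper == converged score */
}
```

The closer the first guess f is to the final score, the fewer probes the loop
needs; a jumpy evaluation makes f a poor guess at every iteration, which is
the killer described above.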

>(4) What are the current thoughts concerning bitboards/rotated-bitboards versus
>conventional 0x88 or other algorithms?  Just looking at open source chess
>programs, it doesn't seem that the chess programming community has come to a
>consensus on relative performance... or have you?

I think that on normal PC hardware, bitboards are "break-even".  They have
their good parts and their downsides.  When you move to 64 bit processors,
the good stays and the downside goes away, leaving them a clear winner.  How
much of a winner is a big question.  Current thinking is 30%, or thereabouts.
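
As a tiny illustration of the bitboard upside, here is the classic
all-pawns-at-once attack computation (the square numbering a1 = bit 0 is an
assumption of this sketch); on a 64-bit processor each shift and mask below
is a single register operation, while a 32-bit machine has to synthesize each
one from two instructions:

```c
#include <stdint.h>

typedef uint64_t bitboard;                 /* one bit per square, a1 = bit 0 */

#define NOT_A_FILE 0xfefefefefefefefeULL   /* every square except the a-file */
#define NOT_H_FILE 0x7f7f7f7f7f7f7f7fULL   /* every square except the h-file */

/* Attack squares of every white pawn in the set, computed at once:
   shift the whole board one rank up and one file left/right, masking
   off the file that would wrap around the board edge. */
bitboard white_pawn_attacks(bitboard pawns)
{
    return ((pawns << 7) & NOT_H_FILE)     /* captures toward the a-file */
         | ((pawns << 9) & NOT_A_FILE);    /* captures toward the h-file */
}
```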

>(5) What is currently thought to be the best algorithm for autofitting the
>static evaluation function parameters? DT's least-squares approach seems
>simplest, but it seems as if a dozen other things have been tried too.  Is there
>any best-of-breed approach here?

I think _most_ use trial and error testing...
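
For the least-squares direction the question mentions, a toy one-parameter
version looks like this (the feature and target values are placeholders, not
a real tuning set):

```c
/* Toy least-squares fit of a single linear evaluation weight:
   given training positions with feature value x[i] (say, a
   material count) and target score y[i], pick the w minimizing
   sum((w*x[i] - y[i])^2).  Closed form: w = sum(x*y)/sum(x*x).
   Real tuners fit many weights at once, but the idea is the same. */
double fit_weight(const double *x, const double *y, int n)
{
    double xy = 0.0, xx = 0.0;
    for (int i = 0; i < n; i++) {
        xy += x[i] * y[i];
        xx += x[i] * x[i];
    }
    return xy / xx;
}
```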

>(6) Has anyone found any real "practical" benefits to fractional-ply extensions?

Yes.  It gives you a way of limiting extensions precisely.  For example you can
adopt a policy of "three triggers earn two one-ply extensions" easily:  set the
increment to 2/3 of a ply, and when you hit three of them you get exactly 2 plies.

It also allows you to taper the extensions off as you go deeper in the tree,
to control tree growth/explosion.
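
The 2/3-ply policy above can be sketched in a few lines (the unit size and
trigger weight are just the example's numbers):

```c
#define PLY 3            /* one ply = 3 internal units           */
#define TRIGGER_EXT 2    /* each trigger adds 2 units = 2/3 ply  */

/* Whole plies of extension earned after some number of triggers.
   Fractions accumulate silently until they amount to full plies,
   so three triggers (3 * 2 = 6 units) yield exactly two plies. */
int extension_plies(int triggers)
{
    return (triggers * TRIGGER_EXT) / PLY;
}
```

Tapering deeper in the tree is then just a matter of shrinking the per-trigger
increment as depth grows.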

>(7) Dennis Breuker has a paper comparing 5 different transposition table
>replacement schemes.  Has anyone else been able to validate his results that
>replacement should be based on the number of nodes searched and not search
>depth?

Hard to say.  The issue will become hash entry size.  You can store "depth" in
a byte, but you will need at least 4 (and really more) bytes for a node count.
Your table will therefore hold fewer entries, and _that_ effect was not
measured.  Also I am not convinced that in endgames nodes searched is the right
criterion, because a hash hit can cut off a subtree that has significant
remaining depth yet counts almost no nodes, since the hit replaces the search.

I don't know that he addressed that case either, such as the one seen in fine
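
The entry-size point can be made concrete with two hypothetical layouts
(field names are illustrative, and exact sizes depend on the ABI's padding):

```c
#include <stdint.h>

/* Draft-based entry: the replacement criterion fits in one byte. */
struct entry_depth {
    uint64_t key;         /* position signature               */
    uint32_t move_score;  /* packed best move and score       */
    uint8_t  draft;       /* remaining search depth, one byte */
    uint8_t  flags;       /* bound type, age, ...             */
};

/* Nodes-based entry: the subtree node count needs 4+ bytes, so
   the same memory holds fewer entries -- an effect a fair
   comparison would also have to measure. */
struct entry_nodes {
    uint64_t key;
    uint32_t move_score;
    uint32_t nodes;       /* nodes searched under this entry  */
    uint8_t  flags;
};
```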
