Computer Chess Club Archives



Subject: Re: DB will never play with REBEL, they simple are afraid no to do well

Author: Robert Hyatt

Date: 07:52:04 10/16/99



On October 16, 1999 at 03:54:35, Ratko V Tomic wrote:

>> yes, 3 is pretty typical, although it steadily drops as material
>> comes off.
>
>I tried Crafty 16.6 within the Fritz UI (default engine settings, 64Mb hash) in
>a middlegame position with 38 available moves. The depths tested were 10, 11 and
>12 plies. The effective branching factor (calculated from the total node count N
>it displayed at the end of the search as B = N^(1/D), where D is the search
>depth) ranged from 4.5 to 4.7. In any case, even that is substantially smaller
>than 6 (roughly sqrt(38)), which is the lower bound for evaluating the exact
>(ignoring the inexactness of the evaluation function itself) minimax value of a
>tree with branching factor 38. That means (regardless of whether its branching
>factor was 3 or 4.5) that the value obtained by Crafty can only be an
>approximate minimax value, i.e. the search isn't equivalent (in the final
>minimax value returned) to a full-width search, such as a pure alpha-beta with
>perfect move ordering (which would have B = 6) would have performed.
>
>So we have two levels of approximation, one due to the inexactness of the
>evaluation function and another due to the inexactness of the minimaxing itself
>(as done by Crafty and other state-of-the-art programs). Presumably the program
>is most efficient when these two sources of error are of roughly the same size,
>i.e. there is no point in wasting time minimaxing rigorously (at the expense of
>B = 6) when the leaf nodes are being evaluated only approximately anyway.
>

The issue is 'null-move', which selectively reduces the search depth on
non-critical branches and thereby reduces the effective branching factor.  If
you try several different positions, you ought to see numbers well below 4.5 on
average...  Note that tactical tests are the wrong kind of positions to measure
with...  because they, by definition, are not searched optimally...
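
For anyone who wants to see the mechanism concretely, here is a rough sketch of
how a null-move reduction sits inside an alpha-beta search.  This is not the
actual Crafty code; the helper routines and the reduction R = 2 are just
placeholders for illustration:

  #define R 2                        /* null-move depth reduction            */

  int  Quiesce(int alpha, int beta);       /* placeholder prototypes         */
  int  InCheck(void);
  void MakeNullMove(void);
  void UnmakeNullMove(void);
  int  SearchMoves(int alpha, int beta, int depth);  /* normal move loop     */

  int Search(int alpha, int beta, int depth) {
    int value;
    if (depth <= 0) return Quiesce(alpha, beta);
    /* give the opponent a free move and search the remaining tree R plies
       shallower.  if even that fails high, the branch is judged non-critical
       and is cut off far more cheaply than a full-depth search would allow.  */
    if (!InCheck()) {
      MakeNullMove();
      value = -Search(-beta, -beta + 1, depth - 1 - R);
      UnmakeNullMove();
      if (value >= beta) return beta;
    }
    return SearchMoves(alpha, beta, depth);
  }

Because whole subtrees get searched R plies shallower, the node count for a
nominal depth D grows much more slowly than it would full-width, which is why
the measured effective branching factor comes out well below the sqrt(38)
bound for plain alpha-beta.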



>
>> As far as 'junk' goes, I have no way of measuring that, since alpha/beta
>> doesn't provide 'how bad' a move is compared to another, just 'better'
>> or 'worse'.
>
>Assuming we call "junk" every _evaluated_ node whose value is, say, J = 1/2
>pawn or more below the PV value, then two passes could obtain the percentage of
>"junk" evaluations, using the first pass to get the root value VR. In the
>second pass, every call to Evaluate() (or every value retrieved from the hash
>table) would be checked on return: if the returned value V is not greater than
>VR - J, a JunkCnt variable is incremented; in any case NodeCnt is incremented.
>That would give us the proportion of "junk" nodes being evaluated.

There are still massive problems, e.g. lazy evaluation.  And when searching the
moves from position "X", the first might produce a score of -.1, which is
enough to cause a cutoff.  If you searched further, the second might produce a
score of -2, which is even more than enough to cause a cutoff.  But alpha/beta
never searches the second...
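
That said, the counting itself as described above is trivial to bolt on.  A
minimal sketch, using the VR, J, JunkCnt and NodeCnt names from the post (the
EvaluateCounted() wrapper is made up for illustration, it is not anything in
Crafty):

  #define J 50                      /* 1/2 pawn, in centipawns               */

  extern int VR;                    /* root value from the first pass        */
  extern int Evaluate(void);        /* the engine's normal static evaluation */

  long NodeCnt = 0;                 /* every static evaluation               */
  long JunkCnt = 0;                 /* evaluations at or below VR - J        */

  int EvaluateCounted(void) {       /* call this wherever Evaluate() is      */
    int v = Evaluate();             /* called (or a hash value is used)      */
    NodeCnt++;
    if (v <= VR - J) JunkCnt++;     /* "junk" by the definition above        */
    return v;
  }

But with lazy evaluation the value returned is often only a bound rather than
the real static score, and with cutoffs the really bad alternatives never get
evaluated at all, so the resulting ratio would be hard to interpret.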



>
>An alternative, approximate method would be to extract a sample (by saving the
>positions) of, say, 10,000 randomly chosen nodes that went through Evaluate(),
>or even of all nodes that were visited (i.e. interior nodes allowed as well).
>Then, upon completion, the sampled positions would get evaluated with a full
>search to a reasonable depth, say 10 plies, exactly as if they were root
>positions. If this more accurate value is below the VR - J junk boundary (VR
>being the value of the originally picked root move), JunkCnt would get
>incremented. Such a sample would give an estimate of the junk and non-junk
>proportions with roughly 1% accuracy.
>
>Obviously, this is not literally "junk" within the given minimax search
>algorithm, since in that context these nodes (or another, equivalent node
>subset) had to be evaluated. They only serve as a rough measure of the kind of
>moves a very strong human player would judge (with plenty of time given for the
>evaluation) as clearly non-viable (or "junk"), imagining he could pursue and
>evaluate all the possibly viable moves as they branch off into a tree of the
>same depth as in the experiment above. That set of moves (or of the positions
>produced) would be the "non-junk" and the rest would be "junk".
>
>I believe that even for the state-of-the-art programs these non-junk nodes (as
>a GM, or a deeper search, might judge them, with enough time allowed) make up a
>fraction of the total nodes (all the nodes the program evaluates) that goes to
>zero exponentially with the depth of the search. In other words, the
>common-sense efficiency of the algorithms diminishes exponentially with the
>depth of the search.


I wouldn't disagree.  Minimax has a huge number of 'junk' positions by any
definition.  However, the search can't be executed without them.
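
For what it's worth, the sampling version proposed above is also easy to
sketch.  Again, none of this is real Crafty code; SetPosition() and
SearchToDepth() are placeholders, and the reservoir sampling just keeps the
saved positions uniformly distributed over everything that was evaluated:

  #include <stdlib.h>
  #include <string.h>

  #define SAMPLE 10000

  void SetPosition(const char *fen);      /* placeholder prototypes          */
  int  SearchToDepth(int depth);          /* full search, returns the score  */

  static char samples[SAMPLE][128];       /* saved positions (FEN strings)   */
  static long seen = 0;

  /* call once per evaluated node; keeps a uniform random sample of them     */
  void MaybeSample(const char *fen) {
    long slot;
    seen++;
    if (seen <= SAMPLE) slot = seen - 1;   /* first fill the reservoir       */
    else slot = rand() % seen;             /* then replace a slot at random  */
    if (slot < SAMPLE) strncpy(samples[slot], fen, 127);
  }

  /* second pass: re-search every sampled position to a fixed depth and
     count how many score at or below the VR - J "junk" boundary             */
  double JunkFraction(int VR, int J) {
    long n = seen < SAMPLE ? seen : SAMPLE, junk = 0, i;
    if (n == 0) return 0.0;
    for (i = 0; i < n; i++) {
      SetPosition(samples[i]);
      if (SearchToDepth(10) <= VR - J) junk++;
    }
    return (double)junk / (double)n;
  }

The "roughly 1%" accuracy mentioned in the post is just the statistical error
of the sample: with 10,000 positions the standard error of a proportion is
about 1/sqrt(10000) = 1%.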


