Computer Chess Club Archives


Subject: Re: DB will never play with REBEL, they simple are afraid no to do well

Author: Robert Hyatt

Date: 12:15:24 10/18/99


On October 18, 1999 at 12:23:00, Ratko V Tomic wrote:

>> I don't think the 'difference' matters...
>
>The display showed total time for all iterations, and although the last one is
>always the largest, to get time per iteration one needs to subtract adjacent
>totals before finding the ratios of the latter.
>

That is what I did, if you notice.  By 'difference' I meant the difference in
the numbers when you compare total time vs. total nodes.
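
To make the arithmetic concrete, here is a tiny C sketch (not Crafty code, and
the cumulative times are invented purely for illustration): subtract adjacent
totals to get the time spent in each iteration, then take ratios of adjacent
iterations.

#include <stdio.h>

int main(void) {
    /* cumulative search time (seconds) printed at the end of
       iterations 1..5 -- made-up numbers, for illustration only */
    double total[] = {0.4, 1.3, 4.0, 12.1, 36.5};
    int n = sizeof(total) / sizeof(total[0]);
    double prev = total[0];               /* time spent in iteration 1 */

    for (int i = 1; i < n; i++) {
        /* time in iteration i+1 alone = difference of adjacent totals */
        double iter = total[i] - total[i - 1];
        printf("iteration %d: %5.2fs   ratio to previous: %.2f\n",
               i + 1, iter, iter / prev);
        prev = iter;
    }
    return 0;
}

With those made-up numbers the per-iteration ratios settle at about 3.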


>> whether you count nodes, or
>> you use time per iteration, it really doesn't matter...  but the time
>> per iteration is _far_ better...
>
>Time is an approximate and very rough substitute for node count, since it is,
>at best, a sum of several different exponential terms and polynomials (in
>nominal depth or iteration number). Whether it is better than node count
>depends on what you are measuring it for. If you are interested in estimating
>the time for some deeper iterations, then yes, time is better. But for other
>purposes node counts may be the more interesting number.
>
>> because your node count method is
>> dead wrong.  it is assuming that "d" is a constant.  It is not.
>> D is very dynamic in crafty, particularly beyond the opening.
>
>The concept of effective branching factor (EBF) is already an artificial
>construct, since the actual branching factor varies from node to node anyway.
>The whole idea behind it is to get a simpler tree (of fixed depth and width)
>as a manageable substitute for the real search tree, in order to do some
>calculations on the simpler model. With the real search tree one could only
>measure (count) various parameters during execution. With the simplified one
>we can get rough estimates without measuring. Of course, we're all well aware
>of the limitations of such a simplified model (built into the EBF concept),
>and that's why I suggested two alternative methods to measure/sample the
>actual proportion of "junk" evaluations/visits and find whether the proportion
>of "non-junk" drops exponentially to 0 with increasing (nominal) search depth.


I understand.  But a program takes 3x longer to go from N to N+1, yet with your
node approach it comes out 4.8.  I would say that type of calculation is wrong
on many grounds.  First, fitting a simple fixed-depth alpha/beta search to the
node count of a search with significant extensions and reductions makes your
4.x number pretty much meaningless, at least in terms of 'branching factor'
discussions.

"branching" factor can mean (a) average nodes at any ply or (b) multiplier to
compute how long the next iteration takes, knowing the time this iteration took.

(b) is reasonable for any search, no matter how selective or non-selective it
is, no matter how many extensions or reductions it uses, etc.  (a) gives a
meaningless number unless you first 'calibrate' it against the same program
with no extra extensions/reductions allowed anywhere.  And even then, how do
you decide whether 4.8 is better or worse than 'normal'?

It is simpler to say that each iteration takes about 3x longer than the last,
so the effective branching factor is 3, because somehow _that_ program is
doing something to drive the EBF lower than other programs...
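
As a minimal sketch of sense (b) in practice (all numbers are illustrative, and
the time-budget check at the end is just one common use of such a prediction,
not anything specific to any particular program):

#include <stdio.h>

int main(void) {
    /* sense (b): EBF as a multiplier -- knowing how long this iteration
       took, estimate how long the next one will take */
    double ebf = 3.0;          /* measured from the previous iterations */
    double this_iter = 27.0;   /* seconds spent on the last full iteration */
    double remaining = 60.0;   /* seconds left in the budget for this move */

    double next_iter = ebf * this_iter;
    printf("predicted next iteration: about %.0f seconds\n", next_iter);
    printf("start another iteration?  %s\n",
           next_iter <= remaining ? "yes" : "no");
    return 0;
}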


