Computer Chess Club Archives



Subject: Re: not using nullmove? [generalized null move]

Author: Robert Hyatt

Date: 12:39:13 02/14/04



On February 14, 2004 at 11:02:09, Russell Reagan wrote:

>On February 14, 2004 at 10:41:15, Robert Hyatt wrote:
>
>>There is one other major flaw.  You just made the search _many_
>> times slower.
>>
>>:)
>>
>>The math:
>>
>>Doing an 8 ply search.  Assume a branching factor of 38.
>>
>>the 8 ply search will search
>>
>>N1 = 38^4 + 38^4 - 1 leaf nodes (the minimal alpha/beta tree).
>>
>>the 38 separate 7-ply searches will search
>>
>>N2 = 38 * (38^4 + 38^3 - 1) leaf nodes
>>
>>N1 = 4,170,271
>>
>>N2 = 81,320,266
>>
>>N2 / N1 = 19.5, so roughly 19.5 times slower.  Not a good thing.
>>
>>This is the penalty you pay if you want the ply=2 scores.  Alpha/beta only gives
>>the best move / ply=1 score...  Additional information has huge additional cost.
>
>This sounds to me like you're leaving out part of the picture :) He _is_ getting
>something pretty significant in return for the speed hit (very accurate scores,
>relatively speaking). This would be like me saying that iterative deepening is
>stupid because you just end up doing N searches and you waste time. If you did
>iterative deepening and didn't take advantage of the scores from previous
>iterations, that _would_ be stupid, but no one does that. Just like he isn't
>taking on a 20x speed hit for nothing.

It isn't the _scores_ that are useful.  It is the _best move_.  Iterative
deepening works without hashing if you simply transfer the N-1 ply best moves to
the N ply search...
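
In rough form, as a toy sketch (made-up names; the "search" here is just a stub
that returns a fake score, and a real engine would carry the whole PV forward,
not just the root move):

#include <stdio.h>

#define ROOT_MOVES 4
#define MAX_DEPTH  5

/* Stand-in for a real alpha/beta search of one root move to the given
   depth.  It just returns a made-up score so the driver below has
   something to call. */
static int search_root_move(int move, int depth) {
  return (move * 7 + depth * 3) % 100;
}

int main(void) {
  int moves[ROOT_MOVES] = {0, 1, 2, 3};
  int best_prev = -1;                 /* best move from the (N-1)-ply iteration */

  for (int depth = 1; depth <= MAX_DEPTH; depth++) {
    /* Transfer the previous iteration's best move to the front of the
       root move list so it is searched first at this deeper iteration. */
    if (best_prev >= 0) {
      for (int i = 0; i < ROOT_MOVES; i++) {
        if (moves[i] == best_prev) {
          int tmp = moves[0]; moves[0] = moves[i]; moves[i] = tmp;
          break;
        }
      }
    }

    int best_score = -1000000, best_move = moves[0];
    for (int i = 0; i < ROOT_MOVES; i++) {
      int score = search_root_move(moves[i], depth);
      if (score > best_score) { best_score = score; best_move = moves[i]; }
    }
    printf("depth %d: best move %d, score %d\n", depth, best_move, best_score);
    best_prev = best_move;            /* feeds the next, deeper iteration */
  }
  return 0;
}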

So, where does the 20x slowdown produce any 20x+ improvement?  That is the $64M
question...
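
(For reference, the 19.5x figure quoted above is easy to check.  A few lines of
throwaway C, using the minimal-tree leaf count b^ceil(d/2) + b^floor(d/2) - 1,
reproduce the numbers; this is just illustration, not engine code:)

#include <math.h>
#include <stdio.h>

/* Leaf count of a perfectly ordered (minimal) alpha/beta tree:
   b^ceil(d/2) + b^floor(d/2) - 1. */
static double minimal_tree_leaves(double b, int d) {
  return pow(b, (d + 1) / 2) + pow(b, d / 2) - 1.0;
}

int main(void) {
  const double b = 38.0;                          /* branching factor    */
  double n1 = minimal_tree_leaves(b, 8);          /* one 8-ply search    */
  double n2 = 38.0 * minimal_tree_leaves(b, 7);   /* 38 separate 7-plies */

  printf("N1 = %.0f\n", n1);           /* ~4.17 million   */
  printf("N2 = %.0f\n", n2);           /* ~81.3 million   */
  printf("N2/N1 = %.1f\n", n2 / n1);   /* ~19.5x slower   */
  return 0;
}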



>
>Initially it may be 20x slower, but if you are able to do pruning and
>reductions, your effective branching factor drops, and that is better than any
>constant loss in speed. Exponential improvement at the cost of a linear
>overhead. Right?

The overhead above is _exponential_ as well, not linear.  The point is that he
has to get a tree reduction of 20x just to break even.  I don't see how that
will happen.  And even if it does, it only breaks even, which means more complex
code to produce the same result...  Again a questionable idea...

If the 20x cost results in a 30x tree reduction, then that is a different matter
of course, but you do have to overcome the basic alpha/beta savings first...
and it isn't clear that can be done...
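
To put a number on "break even": assuming node count grows like EBF^depth, the
little C snippet below (illustrative only) shows how far the effective branching
factor would have to fall just to repay a 20x constant cost, starting from the
roughly sqrt(38) EBF that well-ordered alpha/beta already gives:

#include <math.h>
#include <stdio.h>

int main(void) {
  /* With perfect move ordering, alpha/beta searches roughly b^(d/2)
     nodes, i.e. an effective branching factor near sqrt(38). */
  const double ebf  = sqrt(38.0);   /* ~6.16                             */
  const double cost = 20.0;         /* constant-factor slowdown to repay */

  /* If nodes ~ ebf^d, the 20x cost is repaid only when
     (ebf / new_ebf)^d >= 20, i.e. new_ebf <= ebf / 20^(1/d). */
  for (int d = 4; d <= 12; d += 2) {
    double needed = ebf / pow(cost, 1.0 / d);
    printf("depth %2d: EBF must fall from %.2f to %.2f just to break even\n",
           d, ebf, needed);
  }
  return 0;
}

At 8 plies, for example, the effective branching factor would have to drop from
about 6.2 to about 4.2 before there is any net gain at all.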





