Computer Chess Club Archives


Subject: Re: not using nullmove? [generalized null move]

Author: Dann Corbit

Date: 23:51:01 02/14/04



On February 14, 2004 at 10:41:15, Robert Hyatt wrote:

>On February 14, 2004 at 01:05:46, Dann Corbit wrote:
>
>>I think the interesting thing about null move is that it makes it easy to figure
>>out whether a move is really bad -- which is the main reason it yields a benefit.
>>A tempo advantage is easily calculated by just switching sides.
>>
>>But the idea can be generalized nicely.
>>
>>Suppose that you have completed a seven ply search and it is time to do an 8 ply
>>search.  Suppose (further) that, at the root only, you never analyze the root
>>itself but instead analyze all of the child nodes of the root and save the
>>values.
>>
>>Now, you have some sort of estimate for the "goodness" of each move. Things
>>might get reversed on the next ply, but it isn't terribly likely.  After all, we
>>look at crappy moves and search them less deeply with the null move heuristic.
>>
>>Now, suppose that we have 10 moves available:
>>
>>move 0 has an evaluation of +100 centipawns
>>move 1 has an evaluation of +80 centipawns
>>move 2 has an evaluation of +77 centipawns
>>move 3 has an evaluation of +60 centipawns
>>move 4 has an evaluation of +53 centipawns
>>move 5 has an evaluation of +19 centipawns
>>move 6 has an evaluation of +5 centipawns
>>move 7 has an evaluation of -10 centipawns
>>move 8 has an evaluation of -110 centipawns
>>move 9 has an evaluation of -910 centipawns
>>
>>Will we (knowing this) want to search them all the same?
>>
>>Why not use a logarithmic scale based on the score difference between the best
>>move and the move under consideration?
>>
>>Another advantage:
>>Zugzwang sorts itself out.
>>
>>We can also add refinements like increasing the shear as mobility increases
>>(greater reductions if I have 90 choices than if I have 12).
>>
>>Using estimates from a database also works nicely.  Unfortunately, hash table
>>estimates have not worked out well for me (for attempts at nodes other than the
>>root).
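
A rough sketch of the gap-based, logarithmic reduction described above (this is
not from the original post; the function name and all constants are invented
for illustration):

    #include <math.h>

    /* Reduction grows with the log of the score gap between the best root
       move and the move under consideration, grows a little more when
       mobility is high, and is capped so that a deep enough search still
       finds any tactical shot. */
    int gap_reduction(int best_score,    /* best root move, in centipawns   */
                      int move_score,    /* move being considered           */
                      int num_moves,     /* mobility: number of legal moves */
                      int max_reduction) /* hard cap, in plies              */
    {
        int gap = best_score - move_score;
        if (gap <= 0)
            return 0;                    /* best move: no reduction */

        /* log scale: about 1 ply at a 20 cp gap, about 4 at a 900 cp gap */
        double r = log(gap / 10.0 + 1.0) / log(3.0);

        /* more "shear" when there are many alternatives (90 choices vs. 12) */
        if (num_moves > 40)
            r += 1.0;

        return (int)r > max_reduction ? max_reduction : (int)r;
    }
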
>
>There is one other major flaw.  You just made the search _many_ times slower.
>
>:)
>
>The math:
>
>Doing an 8 ply search.  Assume a branching factor of 38.
>
>the 8 ply search will search
>
>N1 = 38^4 + 38^4 + 1 leaf nodes.
>
>the 38 x 7 ply search will search
>
>N2 = 38 * (38^4 + 38^3 + 1) nodes
>
>N1 = 4,170,273
>
>N2 = 81,320,342
>
>N2 / N1 = 19.5 times slower.  Not a good thing.
>
>This is the penalty you pay if you want the ply=2 scores.  Alpha/beta only gives
>the best move / ply=1 score...  Additional information has huge additional cost.
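
(For reference, the node counts above can be reproduced with a few lines of C,
using the same minimal alpha/beta tree expressions Bob uses -- roughly
b^ceil(d/2) + b^floor(d/2) leaves for a depth-d search with branching factor b:)

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double b = 38.0;
        double n1 = pow(b, 4) + pow(b, 4) + 1;        /* one 8-ply search    */
        double n2 = b * (pow(b, 4) + pow(b, 3) + 1);  /* 38 x 7-ply searches */
        printf("N1 = %.0f\nN2 = %.0f\nN2/N1 = %.1f\n", n1, n2, n2 / n1);
        return 0;  /* prints N1 = 4170273, N2 = 81320342, N2/N1 = 19.5 */
    }
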

A few small differences:
1.  You run the [initially] full searches only to depth - 1 (they are the IID
replacement).  I thought I had mentioned that, but re-reading what I said above,
I see I left it out.

2.  The full-search information is used to progressively shorten the side
searches (up to a limit, so that if you search deep enough you will always find
any tactical shot).

3.  The information obtained in previous plies is applied to each additional
side search.

Maybe with this added information, the search numbers won't look so bad.
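
A very rough sketch of how points 1 and 2 might fit together at the root (this
is not the actual implementation; search(), make(), unmake() and all constants
are placeholders, gap_reduction() is the sketch shown earlier, and point 3 --
carrying the estimates forward to the next iteration -- is left out for
brevity):

    #define MAX_MOVES 256

    extern int  search(int alpha, int beta, int depth);   /* placeholder */
    extern void make(int move);                           /* placeholder */
    extern void unmake(int move);                         /* placeholder */
    extern int  gap_reduction(int best, int score, int n, int cap);

    int root_search(int *moves, int n, int depth)
    {
        int est[MAX_MOVES];              /* depth - 1 estimates per move */
        int i, best = -32000, best_est = -32000;

        /* 1. full-window search of every root child to depth - 1
              (the IID replacement) */
        for (i = 0; i < n; i++) {
            make(moves[i]);
            est[i] = -search(-32000, 32000, depth - 2);
            unmake(moves[i]);
            if (est[i] > best_est)
                best_est = est[i];
        }

        /* 2. full-depth searches, shortened according to the estimates,
              with the reduction capped so a deep enough search still
              finds any tactical shot */
        for (i = 0; i < n; i++) {
            int r = gap_reduction(best_est, est[i], n, 3 /* cap, plies */);
            int score;
            make(moves[i]);
            score = -search(-32000, 32000, depth - 1 - r);
            unmake(moves[i]);
            if (score > best)
                best = score;
        }
        return best;
    }
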


