# Computer Chess Club Archives

## Messages

### Subject: Re: Verified Null-Move Pruning, ICGA 25(3)

Author: Uri Blass

Date: 20:50:08 11/21/02


```On November 21, 2002 at 22:56:40, Vincent Diepeveen wrote:

>On November 21, 2002 at 21:39:06, Uri Blass wrote:
>
>
>
>>On November 21, 2002 at 21:21:09, Vincent Diepeveen wrote:
>>
>>>On November 20, 2002 at 17:51:40, Alessandro Damiani wrote:
>>>
>>>'verified' nullmove, or in a different implementation but the
>>>same algorithm, with just a 1-ply reduction is nearly a full-width
>>>search.
>>>
>>>I did it with a bigger reduction, of course; that is also very
>>>costly compared to R=3. This was just an experiment carried
>>>out years ago when it was described in the ICCA Journal. Now
>>>we have the same algorithm in a slightly different implementation.
>>>
>>>I do not see how Omid can suffer just a 50% slowdown from his
>>>algorithm. Note that he publishes only search depths, not
>>>search times. That is wrong.
>>>
>>>You must publish search times.
>>
>>I do not see how you can suffer more than a 50% slowdown.
>
>Of course you suffer more than a 50% slowdown. Every person with
>a decent chess program will suffer more than a 50% slowdown.
>
>This is trivial.
>
>>I think that you simply do not understand the algorithm.
>
>I understand it perfectly well. I have run with it for years.
>

No.

I also got results showing that it is not as slow as you describe, and I have a
good branching factor.

>
>>The algorithm does not do a nearly full-width search, because after the first
>>reduction the search is normal null-move pruning without verification.
>
>Yes, it nearly does; it is a reduction of just 1 ply!
>
>With null move you reduce 4 plies in the same subtree. Or with R=2 you
>reduce 3 plies!

With R=2 you reduce 3 plies in a recursive search.

Here you reduce 4 plies after the first reduction, and you pay for the first
expensive reduction only in part of the cases.

>
>Do you understand that with a branching factor of 3.5, a search depth of
>2 plies more means roughly a factor of 10 more nodes (3.5^2 ≈ 12), not a
>factor of 2?

The big factor applies only in part of the cases, and it is not a factor of 10,
because after the first reduction you never waste time on expensive
verification searches.

Suppose that you have a line like 1.e4 null 2.d4 null.

When you get a fail-high after the second null, you search to depth-4 and do not
verify the fail-high with a depth-1 search.

Uri

```
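The algorithm Uri describes can be sketched in a few lines. This is a minimal toy, not any engine's actual code: `ToyBoard` and its methods (`evaluate`, `moves`, `make_null`, etc.) are made-up stand-ins for a real position, and zugzwang/in-check guards are simplified. The sketch follows the verified null-move idea debated in the thread: in the verified part of the tree, a null-move fail-high does not cut off immediately; instead the remaining depth is reduced by one ply and verification is switched off below, so "after the first reduction the search is normal null-move pruning without verification."

```python
import random

R = 3  # null-move depth reduction (the R=3 vs. R=2 debate in the thread)

class ToyBoard:
    """Hypothetical stand-in for a chess position: a small random game tree."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.history = []
    def evaluate(self):
        return self.rng.randint(-50, 50)  # toy static evaluation
    def in_check(self):
        return False                      # toy: never in check
    def moves(self):
        return range(3)                   # toy branching factor of 3
    def make(self, move):
        self.history.append(move)
    def unmake(self):
        self.history.pop()
    def make_null(self):
        self.history.append(None)
    def unmake_null(self):
        self.history.pop()

def search(board, depth, alpha, beta, verify=True):
    """Negamax with verified null-move pruning (sketch)."""
    if depth <= 0:
        return board.evaluate()
    if not board.in_check():
        board.make_null()
        score = -search(board, depth - R - 1, -beta, -beta + 1, verify)
        board.unmake_null()
        if score >= beta:
            if not verify:
                return beta  # ordinary (unverified) null-move cutoff
            # Verified part of the tree: do not trust the fail-high yet.
            # Search on with the depth reduced by one ply, and trust
            # null-move cutoffs everywhere below this node.
            depth -= 1
            verify = False
    best = alpha
    for move in board.moves():
        board.make(move)
        score = -search(board, depth - 1, -beta, -best, verify)
        board.unmake()
        if score >= beta:
            return beta      # fail-hard beta cutoff
        best = max(best, score)
    return best
```

Note how the expensive one-ply verification re-search is paid only where a null move actually fails high inside the verified part, which is Uri's point about the slowdown applying "only in part of the cases."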