Author: Tony Werten
Date: 00:40:20 08/11/01
On August 09, 2001 at 17:20:26, Miguel A. Ballicora wrote:
>On August 09, 2001 at 16:09:31, Dieter Buerssner wrote:
>
>>On August 09, 2001 at 12:46:36, Bruce Moreland wrote:
>>
>>>On August 09, 2001 at 10:33:51, Dieter Buerssner wrote:
>>>
>>>>I am using a null move algorithm that is functionally similar to the "double
>>>>null move" Vincent has explained here a few times.
>>>
>>>I don't recall what that was, and I am interested enough that I'll embarrass
>>>myself by asking: what was that?
>>
>>I think Vincent's formulation is more or less this: if the last move in the
>>search tree was a null move, and the move before it was not a null move, do a
>>null move again. So instead of one null move, you always do something like
>>
>>1. move null 2. null move
>>
>>first.
>>
>>So, when the move at 2... does not fail high, the first null move will be
>>refuted automatically. This avoids the zugzwang problem (at higher depth).
>>
>>My formulation is a bit different.
>>
>>Before doing the null move, do a search for the side to move at an even more
>>reduced depth than the null-move search itself (see the code below). Only when
>>this search fails high (with a zero window at beta), try the null move. If it
>>does not fail high, skip the null move.
>>
>>Something like:
>>
>>if (all conditions for null move met)
>>{
>>    /* The next 2 lines are additional to a "normal" null move search */
>>    score = absearch(beta-1, beta, depth-((R1+1)+(R2+1)), ply, side);
>>    if (score >= beta)
>>    {
>>        /* do a null move */
>>        score = -absearch(-beta, -beta+1, depth-(R1+1), ply+1, otherside);
>>        /* check for null move cutoff, etc. */
>>    }
>>}
>>
>>Note that when the first (very shallow) search fails high but the normal
>>null-move search doesn't, you can still use some information for move ordering:
>>the move with which the first absearch failed high will be a good candidate to
>>try early later in the search.
>>
>>As you can easily see, this yields fewer null-move cutoffs, but it should avoid
>>exactly those cutoffs that are possibly incorrect. I have actually only
>>experimented a little bit without this additional search (I implemented it
>>before I knew about recursive null move). In middle-game positions it costs
>>some nodes. In the one match test I have done, it even performed slightly worse
>>than a "normal" null-move algorithm, though not by a statistically significant
>>margin. However, I prefer that with this idea you can never really fail to find
>>some mate in 2 in some obscure zugzwang position (and thereby look really
>>stupid). Of course, you may need more depth than expected to see it.
>>
>>Regards,
>>Dieter
>
>I am doing something different that ends up being similar in concept:
>I do a regular null move, but if it fails high I do not return beta; I just
>reduce the depth to depth-(R+1) and do a normal search.
Then why do a null move first? You're doing double work now. This reduced-depth
razoring is an alternative to null move.
cheers,
Tony
>So, you first do a shallow search and then a null move; if both fail high, you
>cut off. I do a null move and then a shallow search; if both fail high, I cut
>off. I also avoid the problems with zugzwangs. Your version might be better in
>those cases where the shallow search does not fail high, because you do a
>search at normal depth later. However, in my case I just have to rely on the
>result of the shallow search.
>
>Regards,
>Miguel
Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.