Author: Georg v. Zimmermann
Date: 06:15:35 06/25/02
Hi,
yesterday I tried to implement an idea I saw in Crafty: skip the null-move try
when the hash table tells us it will probably not work:
// NullMove : passing should be worse than any other move.
// Only try the null move if the hash entry does not already prove,
// to sufficient depth, that the position scores below beta.
if ((NULL_REDUCTION)
    && ( (te->depth < alphadepth - NULL_REDUCTION * ONE_PLY)
         || (te->type == FAIL_HIGH)
         || (te->value >= beta) )
    && (AIBoard.nullCondition(alphadepth)) )
{
    // NullMoveAlgo [...]
}
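For context, te here is the entry returned by the hash probe earlier in the node.
Negating the try-condition above (De Morgan) gives the veto test directly; below
is a minimal self-contained sketch of it, where HashEntry and the enum are
hypothetical stand-ins for my engine's own types, and the two constants get
arbitrary illustrative values:

enum EntryType { EXACT, FAIL_HIGH, FAIL_LOW };

struct HashEntry {
    int       depth;  // draft the entry was searched to
    int       value;  // score or bound stored for this position
    EntryType type;   // which kind of bound 'value' is
};

const int ONE_PLY        = 4;  // depth units per ply (illustrative)
const int NULL_REDUCTION = 2;  // null-move depth reduction, in plies

// True when the entry proves, to sufficient depth, that the position
// scores below beta, so a null-move try is almost certainly futile.
bool hashVetoesNull(const HashEntry *te, int alphadepth, int beta)
{
    return te != 0
        && te->depth >= alphadepth - NULL_REDUCTION * ONE_PLY
        && te->type  != FAIL_HIGH   // an upper (or exact) bound ...
        && te->value <  beta;       // ... that lies below beta
}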
This avoids thousands of useless null-move tries in my tests. Yet when I let it
spit out the positions where the condition would have prevented a successful
null-move try, I got only 4 in 60 million nodes.
Regardless, it didn't reduce the overall node count in my tests by more than 1%!
What can the reasons for that be?
Maybe the 4 positions where the idea fails are a big factor, because they take
almost as long to search as all the shallower null-move searches combined?
Sounds unlikely.
Maybe the shallower null-move searches are needed as a kind of "internal
null-move deepening": when at a higher depth the hash suddenly no longer tells
me to skip the null move, I don't have the info from those shallower null-move
searches in the TT?
Any ideas how to test those assumptions ?
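One measurement I could wire in for the first assumption (a rough sketch; nodes
and doNullSearch() are stand-ins for my engine's own node counter and null-move
search, reusing hashVetoesNull() from above): in a debug build, run the null
search the veto would have skipped and record its cost and outcome.

extern unsigned long long nodes;        // global node counter (stand-in)
int doNullSearch(int depth, int beta);  // the engine's null-move search

unsigned long long vetoCount  = 0; // how often the hash veto fired
unsigned long long vetoNodes  = 0; // nodes the skipped null searches cost
unsigned long long vetoMisses = 0; // vetoed nulls that would have cut off

void measureVeto(const HashEntry *te, int alphadepth, int beta)
{
    if (!hashVetoesNull(te, alphadepth, beta))
        return;
    unsigned long long before = nodes;
    // run the null search the veto would normally suppress
    int score = doNullSearch(alphadepth - NULL_REDUCTION * ONE_PLY, beta);
    vetoCount++;
    vetoNodes += nodes - before;   // what the condition saves per veto
    if (score >= beta)
        vetoMisses++;              // a position like the 4 above
}

If vetoNodes turns out to be much more than 1% of the total while the subtrees
behind the few vetoMisses stay cheap, the missing savings would point at the
second assumption: the skipped shallow searches were seeding the TT.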
Regards,
Georg v. Zimmermann