Computer Chess Club Archives



Subject: Re: First win against a crafty clone

Author: Alessandro Damiani

Date: 03:25:50 12/18/97



On December 17, 1997 at 20:45:32, Don Dailey wrote:

>On December 17, 1997 at 20:14:55, Dan Homan wrote:
>
>>
>>Hi Don,
>>
>>   What I do is just what Bob and Bruce suggested.  I always
>>use R = 2 and call the next level of search from my null-move code
>>in the following way....
>>
>>     if (depth > R)
>>       best = -pvs(-beta, -alpha, depth-1-R);
>>     else best = -qsearch(ply+1, -beta, -alpha);
>>
>>  If I understand you right, you do something like...
>>
>>     if(depth > R)
>>      best = -pvs(-beta, -alpha, depth-1-R);
>>     else best = static_eval_function();
>>
>>  Where static_eval_function() does something like what you
>>described in your post.  I don't see why this would be slower,
>>although the static_eval_function() might be less accurate than
>>carrying out the qsearch.
>
>My code looks pretty much like what you show here, except
>static_eval_function() is really a simplified swap-off routine.  It's
>not slower, it's faster than the code you seem to be running (at least
>in my program).  Is it less accurate?  I don't know, because it picks
>up things null quies misses but also misses things null quies gets.
>Some versions seem to be more accurate because I usually have king
>safety analysis too, usually major pieces attacking squares around the
>king with support.  But the current version doesn't have this yet.  I
>make conservative assumptions that pick up tactics that null quies
>misses sometimes.
>
>But the bottom line is that when I test it, it scores better
>consistently.  I'm trying to figure out if I'm missing something in my
>null quies implementation.  But I know every program responds
>differently to various algorithms.  I am still curious about others
>who may combine swap-off analysis with null move or simply avoid null
>move altogether.  Is anyone else out there who does this kind of
>thing?
>
>
>-- Don
>
>>
>>On move ordering....
>>
>>  Bob's point about using killers after captures, because they are
>>faster than generating the rest of the moves (for use with the history
>>heuristic), is an interesting one.  That must be a bit-board thing,
>>because I don't see a good way of doing that in my current
>>framework...  I can see how it would be quite a bit faster if I didn't
>>have to generate all the moves at each ply.
>>
>>  I see your point about testing on more than a few positions.  So far
>>I've been using the WAC positions.  I usually test the first 20
>>whenever I make even a minor change.  (For some reason I still
>>can't get #2,  much to my frustration.)   If I make a larger change,
>>I will usually look at the first 50. I think you are right that looking
>>at the positions individually rather than some average is important,
>>but it is harder if I try to do the full set of 300.
>>
>> - Dan

Hi Don!

I have spent a lot of time this year making my program Fortress as
selective as possible. Here are my results:

1. If we cut off at node n at ply p, then the opponent's reply will not
be searched anymore. So forward pruning should be a function of the
opponent's reply ("depth" plies). The null-move idea is therefore
straightforward and simple.

2. Using the null-move for forward pruning, we have several
possibilities: the most expensive is a full dynamic search of the
null-move; the fastest is a purely static null-move evaluation (I think
this is what you do with a swap-off routine instead of quiescence
search). So we have:

	full dynamic null-move search: slow (compared to static eval), but accurate
	...
	something in between
	...
	full static null-move: fast, but inaccurate
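For concreteness, the fully static end of this spectrum can be sketched as a single comparison. This is only an illustration, not Fortress code; `static_eval` stands for whatever score the engine's evaluation or swap-off analysis produces, and `beta` is the usual fail-high bound:

```c
/* Sketch of the "full static null-move" end of the spectrum: fail high
   immediately when the stand-pat score already reaches beta, without
   making the null move or searching at all. Names are illustrative. */
int static_null_test(int static_eval, int beta)
{
    return static_eval >= beta;   /* 1 = cut off without any search */
}
```

The dynamic end of the spectrum replaces this single comparison with an actual reduced-depth search after making the null move, trading speed for accuracy.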

My idea is to combine a full dynamic null-move search with a selective
null-move search:

	if ~InCheck(Me) & ~PVnode & CurrentMove[ply-1]#nullmove &
	   enough_material ->
	   make(null-move);
	   if depth<=4 ->
	      value:= -ThreatSearch(-beta,-beta+1,depth-1)
	   [] depth=5 ->
	      value:= -AlphaBeta(-beta,-beta+1,depth-3)
	   [] depth>5 ->
	      value:= -AlphaBeta(-beta,-beta+1,depth-4)
	   fi;
	   unmake(null-move);
	   if value>=beta -> return value fi
	fi
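The guarded-command fragment above translates roughly into the following C sketch. The search routines here are stand-in stubs that return sentinel values so the dispatch is observable; in a real engine they would be the recursive ThreatSearch and AlphaBeta calls, and the in-check/PV-node/material guards would be tested before calling this:

```c
/* Stand-in stubs; a real engine would use its actual recursive searches. */
static int ThreatSearch(int alpha, int beta, int depth)
{
    (void)alpha; (void)beta; (void)depth;
    return 100;                       /* sentinel: selective search used */
}
static int AlphaBeta(int alpha, int beta, int depth)
{
    (void)alpha; (void)beta;
    return 200 + depth;               /* sentinel encodes reduced depth */
}
static void make_null(void)   {}
static void unmake_null(void) {}

/* Depth-dependent null-move dispatch following the post: shallow nodes
   use the selective ThreatSearch, deeper nodes a reduced full-width
   search with a null window around beta. */
int null_move_value(int beta, int depth)
{
    int value;
    make_null();
    if (depth <= 4)
        value = -ThreatSearch(-beta, -beta + 1, depth - 1);
    else if (depth == 5)
        value = -AlphaBeta(-beta, -beta + 1, depth - 3);
    else
        value = -AlphaBeta(-beta, -beta + 1, depth - 4);
    unmake_null();
    return value;                     /* caller cuts off when >= beta */
}
```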

with ThreatSearch(alpha, beta, depth : int) : int being a selective
alpha-beta search. Currently my ThreatSearch searches only positive
captures (SwapOff>0), safe threats (SwapOff>=0) against higher-valued
pieces, all pawn promotions, and an en passant capture, if one exists.
Hung pieces of the side to move are treated this way: if there is only
one, and it is lost (currently only when pinned to a higher-valued
piece), then best := static_eval(position) - SwapOff(square of the lost
piece); otherwise best := static_eval(position). If there are two hung
pieces, the loss is the maximal SwapOff() on those squares if the piece
on one of them is lost, or else the minimal SwapOff().
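A SwapOff() value like the one used above can be computed with the standard swap-list algorithm. This is a minimal, board-independent sketch (not the Fortress routine); it assumes the caller has already collected the values of the capturing pieces in the order they would be used, alternating sides:

```c
/* Swap-off (static exchange) on one square. target is the value of the
   piece initially on the square; movers[0..n-1] are the values of the
   capturing pieces in the order they move, alternating sides. Returns
   the material balance for the side moving first, assuming each side
   only continues capturing when it pays off. */
int swap_off(int target, const int movers[], int n)
{
    int gain[32];
    int d;
    if (n > 31) n = 31;
    gain[0] = target;
    for (d = 1; d < n; d++)            /* speculative running scores   */
        gain[d] = movers[d - 1] - gain[d - 1];
    while (--d > 0) {                  /* minimax backward: each side  */
        int stand = -gain[d - 1];      /* may stop capturing instead   */
        gain[d - 1] = -(stand > gain[d] ? stand : gain[d]);
    }
    return gain[0];
}
```

For example, with values P=1, N=3, R=5: a pawn capturing a knight defended by a pawn nets +2, while a rook grabbing a pawn defended by a pawn nets -4.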

To avoid searching the null-move where it is least effective, I have an
additional condition before searching:

	if ~InCheck(Me) & ~PVnode & CurrentMove[ply-1]#nullmove &
	   STATIC_EVAL(POSITION)>=BETA & enough_material ->
	   search null-move
	fi
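Taken together, the guards amount to a single predicate; a sketch with illustrative parameter names (in an engine these values would come from the current search state):

```c
#include <stdbool.h>

/* Combined preconditions for trying the null move, as in the guarded
   command above: not in check, not a PV node, the previous move was not
   itself a null move, the static evaluation already reaches beta, and
   enough material remains (to avoid zugzwang trouble). */
bool should_try_null(bool in_check, bool pv_node, bool prev_was_null,
                     int static_eval, int beta, bool enough_material)
{
    return !in_check && !pv_node && !prev_was_null
        && static_eval >= beta && enough_material;
}
```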


What do you think of it?

Alessandro


