Author: Gian-Carlo Pascutto
Date: 13:04:56 02/14/00
On February 13, 2000 at 20:04:19, Alessandro Damiani wrote:
[snip]
>Now, the problem is to score the moves. All moves in a node are made and
>evaluated. If you do that only with the static evaluation function you will get
>wrong scores. For instance, white to move can capture the black rook. By using
>only static scores you evaluate that capture as a win. But a quiescence search
>could reveal that black captures your white queen, and therefore capturing the
>rook is a loss!
Both the rook and queen capture moves will look singularly better when compared
to the alternatives. ABC will search this line deeper, find out it's no good,
and dump it. No gain in using a qsearch here.
>This happens, for example, when a white knight is trapped. Every move of the
>knight loses material. If you evaluate these moves with the static evaluation
>function you won't see that. The quiescence search evaluates every knight move
>correctly as a loss. Now, let's assume the knight has only one escape square.
>There is only one move that saves the knight, all the other moves lose.
>The scores of the moves could be (by evaluating with the quiescence search):
>
>1. score(m0) = 0 (escape move)
>2. score(m1) = -3 (losing knight move)
>3. score(m2) = -3 (losing knight move)
>4. score(m3) = -3 (a quiet pawn move, the knight will be lost)
>5. score(m4) = -9 (moves the queen to an unsafe square, the queen will be
>lost)
>
>In this example m0 is singular. If the scores had been calculated only
>with the static evaluation function you would get a worse measure of forcedness.
>
>And the measure of forcedness has a great influence on the convergence of the
>search!
True, but the cost of calling the quiescence search and the cost of searching
this line one ply deeper with an AB-C search would probably be very similar,
because they work very much alike. So again, you're not very likely to get much
improvement, if any.
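To make the forcedness measure concrete, here is a rough sketch of option counting over the knight example above. The margin value and the flat static scores are my own assumptions, not from the paper:

```python
MARGIN = 1  # pawns; an assumed singularity margin

def options(scores, v):
    # Count the moves that are still "options" at value v, i.e. whose
    # score stays above v - MARGIN.
    return sum(1 for s in scores if s > v - MARGIN)

qsearch_scores = [0, -3, -3, -3, -9]   # m0..m4 from the example above
static_scores  = [0, 0, 0, 0, 0]       # hypothetical flat static scores

print(options(qsearch_scores, 0))  # -> 1: m0 is singular, the node is forced
print(options(static_scores, 0))   # -> 5: no forcedness detected at all
```

With quiescence scores the escape move stands alone; with the flat static scores every knight move still looks like an option, so the measure of forcedness degrades exactly as described.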
>
>Note that by using the quiescence search you do a 1-ply search of all moves.
>You could generalize this to search all moves k plies deep. For example k := depth - R.
>With R := 1 you get Singular Extensions (with some exceptions). Just a thought.
What's the difference between this and looking at those moves with an ABC search
of the same depth? The beauty of AB-C is that any kind of forcedness is
extended, be it a check, a singular move, a recapture...
>Did you try quiescence scores?
Yes, my program ran a lot faster (as fast as normal AB), but didn't get as deep
in the same time, so it was a loss. Very strange.
>Did you try to speed up near the horizon like
>described in the paper?
No. I want to get to know the basic algorithm first before adding the more
dubious extensions like the Static Option Estimators.
Currently I'm not looking at the actual search speed or solution times, but
rather at whether it finds a move or not, and if so, how many nodes it needed
compared to alpha-beta.
If alpha-beta needs 50000 nodes to see a move, at 12500 nps, and ABC needs
2000000 nodes to see the same move, at 2700 nps, using ABC will be a loss
no matter how fast I get it.
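Spelling out that arithmetic:

```python
# Time-to-solution for the node counts and speeds above:
ab_time  = 50000 / 12500     # alpha-beta: 4.0 seconds
abc_time = 2000000 / 2700    # ABC: roughly 741 seconds

print(ab_time)               # -> 4.0
print(round(abc_time))       # -> 741
```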
Unfortunately, that's the situation I'm in today.
>I had another idea: in the paper the mapping
[snip]
>That means a move with no alternative moves gets a depth reduction of 0 and is
>searched a full ply deeper. This may produce very deep trees.
And very slim ones, because they are only one node wide...
>This way a very forced move has a depth reduction of 1 (it is searched with
>depth-1), while not forced moves with depth reduction 3 (they are searched with
>depth-3).
>
>And your search will be at most as deep as the search depth set at the root. What do
>you think?
I tried this. I got a lot deeper a lot faster (quite logical, because the
definition of depth changes when f changes), but ABC still performed very badly.
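The capped mapping quoted above might be sketched like this. The clamp values (1 and 3) come from the quote; the exact alternatives-to-reduction rule is my own assumption:

```python
def depth_reduction(alternatives):
    # Assumed mapping: a forced move (no alternatives) gets a reduction
    # of 1 ply, each extra alternative adds a ply of reduction, clamped
    # at 3 so no line can be searched deeper than the root depth allows.
    return min(3, 1 + alternatives)

print(depth_reduction(0))  # -> 1: forced move, searched with depth-1
print(depth_reduction(4))  # -> 3: plenty of alternatives, depth-3
```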
I did find a flaw in my implementation of the algorithm though. When counting
the conspiracy depth, the move actually made should not count as an alternative.
In my implementation it does. Am I correct to assume the effect of this is
minimal, as it's just that one move that is counted once too often? Having it
this way allows for much cleaner code...
Also, when I implemented the ABC search I translated it to a negamax framework.
I wonder if my translation is actually correct:
Procedure ABC(node, alpha, beta, dmax, dmin, maxoptions, minoptions)
    if terminate(node, alpha, beta, dmax, dmin, maxoptions, minoptions) then
        return eval();
    v = alpha;
    for each child:
        next-max-options = sibling-options + max-options;
        // negamax: negate the child's score and swap the bounds,
        // the depth limits and the option lists
        v = max(v, -ABC(child, -beta, -v, dmin, dmax, minoptions,
                        next-max-options));
        if (v >= beta) then return v;
    endfor
    return v;
EndProc;

Procedure terminate(node, alpha, beta, dmax, dmin, maxoptions, minoptions)
    v = eval();
    if (v >= beta) and (MaxDepth(beta, maxoptions) >= dmax) then return true;
    if (v <= alpha) and (MinDepth(alpha, minoptions) >= dmin) then return true;
    if (v > alpha) and (v < beta) and (MaxDepth(v, maxoptions) >= dmax) and
        (MinDepth(v, minoptions) >= dmin) then return true;
    return false;
EndProc;

Procedure MaxDepth(v, maxoptions)
    depth = 0;
    for all lists in maxoptions
        options = 0;
        for all listelements
            if (maxoptions[list][element] > (v - margin)) then
                options = options + 1;
        depth = depth + S(options);
    return depth;
EndProc;

Procedure MinDepth(v, minoptions)
    depth = 0;
    for all lists in minoptions
        options = 0;
        for all listelements
            if (minoptions[list][element] < (v + margin)) then
                options = options + 1;
        depth = depth + S(options);
    return depth;
EndProc;
I hope someone can make sense of this. I'm especially uncertain whether the
MaxDepth and MinDepth procedures are correct. Is it right that they are
identical to the minimax versions?
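For what it's worth, here is a directly runnable transcription of the MaxDepth/MinDepth procedures above in Python. The S() increment and the margin are stand-ins of my own, not values from the paper:

```python
MARGIN = 50  # centipawns; assumed singularity margin

def S(options):
    # Stand-in conspiracy increment: a forced move (one option) adds no
    # depth, every extra option adds a ply, capped at 3.
    return min(3, max(0, options - 1))

def max_depth(v, maxoptions):
    # maxoptions is a list of sibling-score lists collected on the path.
    # A sibling is still an option if it can keep the value above v.
    depth = 0
    for scores in maxoptions:
        options = sum(1 for s in scores if s > v - MARGIN)
        depth += S(options)
    return depth

def min_depth(v, minoptions):
    # Mirror image: a sibling is an option if it can still push the
    # value below v.
    depth = 0
    for scores in minoptions:
        options = sum(1 for s in scores if s < v + MARGIN)
        depth += S(options)
    return depth

# A path of two forced choices accumulates no conspiracy depth:
print(max_depth(0, [[0, -300], [0, -500]]))    # -> 0
# A path with real alternatives does:
print(max_depth(0, [[0, 10, 20], [0, -500]]))  # -> 2
```

This makes the intended behaviour visible: forced lines accumulate no conspiracy depth and therefore keep getting searched, while wide nodes hit the dmax/dmin limits quickly.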
--
GCP
Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.