Computer Chess Club Archives



Subject: Re: About qsearch...

Author: Rafael Andrist

Date: 04:38:24 12/28/01


On December 28, 2001 at 06:14:32, Uri Blass wrote:

>On December 28, 2001 at 05:36:52, Rafael Andrist wrote:
>
>I understand the idea of the incomplete flag, but
>it is not clear to me that it is a good idea.
>
>If you set uncomplete flag only to repeat the search later then
>you may waste more nodes.

If you hash the qsearch (of course only its completed subtrees), it should not
cost very much, but that depends heavily on how you implement it.
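A minimal sketch in C of what "hash only the completed subtrees" could look like; hash_store/hash_probe, the entry layout, and store_if_complete are my own stand-ins for illustration, not any engine's actual API:

```c
#include <stdbool.h>

/* Illustrative sketch: store a qsearch result in the hash table only
 * when the whole subtree below it was searched to completion.
 * The table layout and helper names here are assumptions. */

typedef struct {
    unsigned long long key;   /* Zobrist key of the position */
    int value;                /* qsearch score */
    int depth;                /* remaining depth the value is good for */
} HashEntry;

#define TABLE_SIZE 1024
static HashEntry table[TABLE_SIZE];

static void hash_store(unsigned long long key, int value, int depth) {
    HashEntry *e = &table[key % TABLE_SIZE];
    e->key = key; e->value = value; e->depth = depth;
}

static bool hash_probe(unsigned long long key, int *value) {
    HashEntry *e = &table[key % TABLE_SIZE];
    if (e->key == key) { *value = e->value; return true; }
    return false;
}

/* Store only when the subtree finished inside the node budget:
 * an incomplete subtree's value is unreliable and must not be reused. */
static void store_if_complete(unsigned long long key, int value,
                              int depth, bool complete) {
    if (complete)
        hash_store(key, value, depth);
}
```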

Example:
Inside the normal search you have a subtree with node S at its root. S has three
successor nodes: A, B, and C.

If the qsearches of A and B are incomplete, but the one of C finishes and causes
a beta cutoff, time and nodes have been saved.

Unfortunately, it is very likely that if the qsearch of A is incomplete, the
qsearches of B and C will be incomplete as well, because S allows capture
sequences that are too long. Depending on the implementation, three things are
possible:
1) ignore it (what you suggest)
2) a) re-do the qsearches without a limit (if you don't hash the qsearch, this
is expensive)
   b) re-do the qsearches with a wider limit (iterative deepening for the
qsearch, only recommended when hashing q-nodes)
3) mark S as incomplete and re-search S if it could be relevant for the value of
the whole tree. If you later need to do that, the hash information about the
qsearches of A-C will surely be lost by then. Another problem is that you don't
know which of the qsearches of A, B, and C were incomplete.

2a is easy to implement and no relevant information is lost, but the possible
gain is also relatively small. At least it avoids the problem of not seeing a
mate in 1 because of too-long qsearches.
I don't know whether 3 gains more than it loses; I would have to test it.
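Option 2b above can be sketched as a loop that widens the node budget until the qsearch completes. The qsearch body here is a stand-in: nodes_needed simulates how large the real capture tree is, and the returned "score" is a dummy; only the widening logic is the point.

```c
#include <stdbool.h>

typedef struct {
    int value;      /* qsearch score (dummy in this sketch) */
    bool complete;  /* did the search finish within the budget? */
} QResult;

/* Stand-in for a real budgeted qsearch: it "completes" only if the
 * budget covers the (simulated) size of the capture tree. */
static QResult qsearch_budgeted(int nodes_needed, int node_limit) {
    QResult r;
    r.complete = (nodes_needed <= node_limit);
    r.value = r.complete ? nodes_needed /* dummy score */ : 0;
    return r;
}

/* Option 2b: if the qsearch comes back incomplete, repeat it with a
 * wider node budget -- iterative deepening over the qsearch limit. */
static int qsearch_widening(int nodes_needed, int start_limit) {
    int limit = start_limit;
    QResult r = qsearch_budgeted(nodes_needed, limit);
    while (!r.complete) {
        limit *= 2;   /* widen the budget and try again */
        r = qsearch_budgeted(nodes_needed, limit);
    }
    return r.value;
}
```

With hashed q-nodes, each re-search would replay the already-stored completed subtrees cheaply, which is why 2b is only attractive when hashing; without the hash, every widening repeats the full work.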

>I know that limiting the qsearch was
>also done by other people, but the main idea that
>I read here was limiting the qsearch based on the search
>depth (only 2x plies in the qsearch when
>the nominal depth is x, and I see no reason to decide
>about the limit based on the search depth)

If I turn off all extensions, the maximal depth reached is usually twice the
nominal depth. The reason to limit the qsearch in that way is probably based on
such observations.
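That "2x nominal depth" cap amounts to one ply test at the top of the qsearch. A toy skeleton, with the move generation elided (the recursion over captures is only indicated in a comment, and all names are illustrative):

```c
/* Sketch of a qsearch capped at twice the nominal search depth.
 * The position itself is abstracted away; only the cap is shown. */
static int qsearch_capped(int ply, int nominal_depth, int stand_pat) {
    if (ply >= 2 * nominal_depth)   /* cap: max qsearch ply = 2 * depth */
        return stand_pat;           /* fall back to static evaluation */
    int best = stand_pat;           /* side to move may decline captures */
    /* for each capture c:
     *     best = max(best,
     *         -qsearch_capped(ply + 1, nominal_depth, -eval_after(c))); */
    return best;
}
```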

>I believe that some commercial programs use similar ideas
>(I guess Genius, and I guess today also Tiger),
>but some of the commercial programs do not use it, and
>the best free program with available source code (Crafty)
>does not limit the qsearch, and I doubt
>that there is one free program that limits the qsearch based on
>the ideas that I suggested
>
>I mean:
>1)Doing the search more selective in the last plies
>of the search for example
>by not considering pawn captures,

SEE will avoid most of them anyway. But it is dangerous in the endgame.

>2)Having some limit on the qsearch based on the depth,
>3)using special evaluation(SEE) when
>there are captures but you cannot search deeper
>because of your limit(SEE is used in crafty only
>for pruning captures in the qsearch
>or for order of moves and not for evaluating
>and this is the reason that I did not use the word SEE
>because I did not do a connection between SEE and evaluation
>function).

This sounds like a good idea. Since static eval may have problems with pinned
pieces, detecting "simple" positions that can be evaluated statically may be
useful. In the other cases you could extend the qsearch limit a bit.
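For reference, the standard swap-off form of SEE is one way to evaluate a capture statically when the limit is hit. This is not Crafty's implementation: here the attacker and defender piece values arrive as ascending arrays, which a real engine would derive from the board and its attack tables.

```c
/* Swap-off SEE sketch for a capture on one square.
 * att: values of the side to move's attackers, ascending;
 * def: values of the opponent's defenders, ascending. */
static int see_swap(int target_value,
                    const int *att, int n_att,
                    const int *def, int n_def) {
    int gain[33];                  /* at most 32 pieces can take part */
    int d = 0;
    gain[0] = target_value;        /* least valuable attacker takes first */
    int occupier = att[0];         /* value of the piece now on the square */
    int ai = 1, di = 0;
    for (;;) {
        int next;
        if (d % 2 == 0) {          /* opponent's turn to recapture */
            if (di >= n_def) break;
            next = def[di++];
        } else {                   /* our turn to recapture */
            if (ai >= n_att) break;
            next = att[ai++];
        }
        d++;
        gain[d] = occupier - gain[d - 1];  /* speculative gain at depth d */
        occupier = next;
    }
    while (d > 0) {                /* minimax backward: each side may stand pat */
        if (-gain[d] < gain[d - 1])
            gain[d - 1] = -gain[d];
        d--;
    }
    return gain[0];
}
```

With pawn = 1 and queen = 9: a pawn taking a knight (3) that is defended by a pawn yields 2 (win the knight, lose the pawn), while a queen taking a pawn defended by a pawn yields -8.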

>Uri

regards
Rafael B. Andrist


