Computer Chess Club Archives



Subject: Re: SEE for forward pruning in Q. Search

Author: Vincent Diepeveen

Date: 03:41:24 08/09/99



On August 08, 1999 at 21:35:46, Robert Hyatt wrote:

>On August 08, 1999 at 20:00:45, Vincent Diepeveen wrote:
>
>>On August 07, 1999 at 17:58:55, Robert Hyatt wrote:
>>
>>>On August 07, 1999 at 08:17:49, Vincent Diepeveen wrote:
>>>
>>>>On August 05, 1999 at 22:43:29, Robert Hyatt wrote:
>>>>
>>>>>On August 05, 1999 at 17:13:28, Tom King wrote:
>>>>>
>>>>>>On August 04, 1999 at 20:00:49, Robert Hyatt wrote:
>>>>>>
>>>>>>[snip]
>>>>>>>
>>>>>>>
>>>>>>>I find the following:
>>>>>>>
>>>>>>>using SEE to order captures in the q-search, without eliminating any, will
>>>>>>>shrink the tree about 10% over using something simple like MVV/LVA.  But the
>>>>>>>SEE code will likely cost you more than 10% (unless you are a bitmap program
>>>>>>>where this can be done fairly efficiently).
>>>>>>>
>>>>>>>using SEE to eliminate losing captures can speed you up another 50%, or a factor
>>>>>>>of two, which is very significant.  And no matter how slow your SEE code is,
>>>>>>>that becomes a 'winner' of an idea.
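
A minimal sketch of both uses of SEE described above: ordering captures by
expected gain, and pruning the captures that SEE scores as losing. Every name
in it (gen_captures(), see(), search_capture()) is an assumed interface for
illustration, not code from any of the programs discussed here:

  typedef struct Position Position;
  typedef int Move;
  extern int gen_captures(Position *pos, Move *out);  /* returns count    */
  extern int see(Position *pos, Move m);    /* centipawns won by mover    */
  extern int search_capture(Position *pos, Move m, int alpha, int beta);

  #define MAX_CAPS 64

  int captures_with_see(Position *pos, int alpha, int beta)
  {
      Move caps[MAX_CAPS];
      int  val[MAX_CAPS];
      int  n = gen_captures(pos, caps);
      int  i;

      for (i = 0; i < n; i++)          /* score every capture with SEE */
          val[i] = see(pos, caps[i]);

      /* insertion sort, best exchange first: the ~10% ordering gain */
      for (i = 1; i < n; i++) {
          Move m = caps[i];
          int  v = val[i], j = i;
          while (j > 0 && val[j - 1] < v) {
              caps[j] = caps[j - 1];
              val[j]  = val[j - 1];
              j--;
          }
          caps[j] = m;
          val[j]  = v;
      }

      for (i = 0; i < n; i++) {
          if (val[i] < 0)     /* SEE says this capture loses material:  */
              continue;       /* drop it, the factor-of-two idea above  */
          int score = search_capture(pos, caps[i], alpha, beta);
          if (score > alpha) {
              alpha = score;
              if (alpha >= beta)
                  break;      /* beta cutoff */
          }
      }
      return alpha;
  }
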
>>>>>>
>>>>>>I'm seeing a big speedup - it's just the (possible) loss of accuracy which
>>>>>>concerns me. Having said that, my Q search is pretty "quick and nasty" anyway,
>>>>>>although I do still do things like probe the hash tables.
>>>>>
>>>>>
>>>>>This is only my opinion, but I spend my time working on the full-width part of
>>>>>the search (extensions, etc.).  The q-search already has _so many_ errors in it
>>>>>(it is highly selective since throwing out everything but captures is a drastic
>>>>>step, of course) that I don't trust it at all.  I just want it to handle simple
>>>>>hung pieces and not much else...  I'll trust my extensions to find the deep
>>>>>tactical tricks since then I won't be overlooking pins, forks, skewers, etc.
>>>>>
>>>>>When you think about it like that, shrink the q-search and use those nodes in
>>>>>places where they are more useful.
>>>>>
>>>>>Just an opinion, of course...
>>>>
>>>>Right, my opinion is different. A good qsearch will give more accurate
>>>>scores for leaves, so for every leaf x in a set of leaves X we will
>>>>have a more reliable score.
>>>>
>>>>So at whatever ply depth we reach, we get a positionally more
>>>>trustworthy score, which when backed up will result in a better and
>>>>more reliable score at the root.
>>>>
>>>>Greetings,
>>>>Vincent
>>>
>>>
>>>Most searches have three parts:  (1)full width to depth=N; (2) highly selective
>>>until a terminal position is reached;  (3) static evaluation.
>>>
>>>I tend to put my faith in (1)... you want to put yours in (2).  I think it is
>>>nothing more than a tradeoff...  If your (2) is a lot bigger than mine (and it
>>>is if you do checks/out-of-check moves there) then your (1) is smaller than
>>>mine.  In some positions, you will do better.  In others, I will do better.
>>>
>>>The real question is, in real games, which works better?  The jury is still
>>>out, but not doing checks is not getting me killed...  I did checks in the
>>>q-search many versions ago.  I'm going to test them again pretty soon.  But
>>>the real question is which plays better?
>>>
>>>I don't think that is anywhere near a "resolved issue"...
>>
>>We agree here that this is not a resolved issue, as so many things
>>depend upon it.
>>
>>The only thing that is sure is that pruning on alpha in the qsearch is
>>dubious (assuming you have a static evaluation which can produce some
>>real scores).
>>
>>The only thing we can do to reduce overhead is pruning on beta.
>>
>>so basically:
>>  if( !check ) {
>>    score = evaluate();      /* stand-pat score */
>>    if( score >= beta )      /* prune on beta only */
>>      return score;
>>    else for all interesting moves:
>>      try move;
>>  }
>>  else {                     /* in check: no stand pat */
>>    for all moves:
>>      try move;
>>  }
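
Rendered as actual C, that pseudocode takes roughly the shape below. All
helper names (evaluate(), in_check(), gen_interesting(), gen_all(), make(),
unmake()) are assumptions for illustration, and mate detection when in check
is left out:

  typedef struct Position Position;
  extern int  evaluate(const Position *pos);
  extern int  in_check(const Position *pos);
  extern int  gen_interesting(Position *pos, int *moves); /* captures etc. */
  extern int  gen_all(Position *pos, int *moves);
  extern void make(Position *pos, int m);
  extern void unmake(Position *pos, int m);

  #define MAXM 128

  int qsearch(Position *pos, int alpha, int beta)
  {
      int moves[MAXM], n, i, score;

      if (!in_check(pos)) {
          score = evaluate(pos);        /* stand pat */
          if (score >= beta)            /* the beta pruning above */
              return score;
          if (score > alpha)            /* stand pat may raise alpha */
              alpha = score;
          n = gen_interesting(pos, moves);
      } else {
          n = gen_all(pos, moves);      /* in check: try everything */
      }

      for (i = 0; i < n; i++) {
          make(pos, moves[i]);
          score = -qsearch(pos, -beta, -alpha);
          unmake(pos, moves[i]);
          if (score > alpha) {
              alpha = score;
              if (alpha >= beta)        /* fail high */
                  return score;
          }
      }
      return alpha;
  }
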
>>
>>Now what I personally think is that
>>TRYING a move in qsearch has a huge
>>influence on the hashtable.
>>
>
>
>Note that it isn't a factor for me...  I don't store any q-search nodes in the
>hash table at all...

I do, but that isn't what I meant. I'm talking about
mainsearch nodes with plyleft = 1,2,3,4,
where the score gets returned from the qsearch after doing a nullmove.

>
>
>>This has of course a strong relation with evaluation.
>>Suppose the evaluation evaluates pieces that can be captured, so
>>the evaluation more or less evaluates using SEE.
>>
>>That definitely reduces the number of
>>captures that need to be tried.
>>
>>Just playing exchanges out is simply not needed.
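
As a sketch of that idea: if the static evaluation already accounts for
hanging pieces, the qsearch can often stand pat instead of playing the
exchange out. The interface here is assumed, and best_see_capture() in
particular is hypothetical:

  typedef struct Position Position;
  extern int static_eval(const Position *pos);      /* material + position */
  extern int best_see_capture(const Position *pos); /* largest SEE gain for
                                                       the side to move, or
                                                       0 if nothing wins   */

  int eval_with_threats(const Position *pos)
  {
      int score = static_eval(pos);
      /* credit the mover with its best capture per SEE; a real program
       * would damp this bonus, since the threat may still be parried */
      score += best_see_capture(pos);
      return score;
  }
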
>>
>>In DIEP I don't have recapture extensions, but I do
>>usually capture equal pieces, even when there is a strong suspicion
>>that the piece can be captured back by the other side.
>>
>>I prefer to do as much as possible within the mainsearch, as
>>I can use my hashtables better there. Therefore I limit
>>the number of moves tried in the qsearch drastically.
>>
>
>my feeling exactly, so far...
>
>
>>First thing to do is introduce two new node counters;
>>counters that measure the number of nodes searched in
>>
>>  a) the qsearch
>>  b) the brute-force part
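
One way to instrument this is a counter bumped at the entry of each search
routine; the names and placeholder bodies below are illustrative only:

  typedef struct Position Position;
  extern int evaluate(const Position *pos);

  static unsigned long qsearch_nodes;   /* a) nodes in the qsearch        */
  static unsigned long brute_nodes;     /* b) nodes in brute-force part   */

  int count_qsearch(Position *pos, int alpha, int beta)
  {
      qsearch_nodes++;                  /* count every qsearch node */
      /* ... stand pat and captures as usual ... */
      return evaluate(pos);             /* placeholder */
  }

  int count_search(Position *pos, int depth, int alpha, int beta)
  {
      brute_nodes++;                    /* count every full-width node */
      if (depth <= 0)
          return count_qsearch(pos, alpha, beta);
      /* ... generate and search moves as usual ... */
      return alpha;                     /* placeholder */
  }
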
>>
>>Some 5 years ago, when the first versions of DIEP
>>were made, I already figured out that the more moves I
>>tried in a), the fewer nodes I needed in b).
>>
>>So the first word I thought of was: "trade-off".
>>
>>However, the total node count could drastically increase
>>at smaller depths when doing more moves in a). At
>>bigger depths the influence was harder to define, as the
>>branching factor got slightly better.
>>
>>Later analysis showed that some really silly moves in the
>>qsearch, like pointless checks and queen captures of
>>covered pawns, wasted bunches of nodes.
>>
>>So some knowledge in move selection
>>improved this, while still keeping the good branching factor.
>>
>>This "trade-off" we will of course keep.
>>
>>It's very important to realize *what* nodes in b)
>>get influenced by a).
>>
>>I'm using nullmove R=3. This means that I nullmove the
>>last 4 plies using the qsearch.
>
>
>I've been doing a 'hybrid' for nearly a year...  R=3 fairly close to the
>root, R=2 beyond some threshold whose value I don't remember...
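
A sketch of that null-move shape, combining the R=3 Vincent describes with
the hybrid Bob mentions. NEAR_ROOT stands in for his unspecified threshold,
and all helper names are assumptions:

  typedef struct Position Position;
  extern int  search(Position *pos, int depth, int alpha, int beta);
  extern int  qsearch(Position *pos, int alpha, int beta);
  extern void make_null(Position *pos);
  extern void unmake_null(Position *pos);

  #define NEAR_ROOT 6   /* hypothetical: the threshold value isn't given */

  static int null_move_score(Position *pos, int depth, int beta)
  {
      int R = (depth >= NEAR_ROOT) ? 3 : 2;  /* R=3 near root, R=2 deeper */
      int score;

      make_null(pos);
      if (depth - 1 - R <= 0)
          /* with R=3 this is every node with 1..4 plies left: the null
           * move is verified purely by the qsearch, so threat detection
           * there is only as good as the qsearch is */
          score = -qsearch(pos, -beta, -beta + 1);
      else
          score = -search(pos, depth - 1 - R, -beta, -beta + 1);
      unmake_null(pos);

      return score;   /* caller prunes if score >= beta */
  }
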
>
>
>
>>
>>In fact this way of nullmoving only detects whether
>>there are threats against the position. The better I see
>>the threats in the qsearch, the more reliable nullmove will be
>>in the last 4 plies. It's not difficult to imagine a position
>>where just using the qsearch to verify a nullmove misses positional things.
>>
>>Of course you search so much deeper because of nullmove that you
>>identify those positions anyway, but then the unreliable score here
>>causes the hashtables to store bad scores. Those scores won't
>>be backed up to the root. They will, however, influence the
>>search tree considerably in move choices next iteration.
>>
>>And every iteration there are exponentially more nodes that
>>show this phenomenon.
>>
>>So in short, a better qsearch gives my program a more
>>reliable score in a large number of positions, which
>>results in a smaller branching factor.
>>
>>However, there are clearly tradeoffs: you cannot do *all checks*.
>>You cannot do *all captures*. You cannot do *all* interesting moves.
>>
>>Vincent


