Computer Chess Club Archives



Subject: Re: Singular Extensions, Nullmove deepsearch

Author: Robert Hyatt

Date: 18:56:32 02/18/99

On February 18, 1999 at 20:38:06, Vincent Diepeveen wrote:

>On February 18, 1999 at 17:13:12, Robert Hyatt wrote:
>
>>On February 17, 1999 at 13:55:27, Vincent Diepeveen wrote:
>>
>>>On February 16, 1999 at 04:45:43, Frank Schneider wrote:
>>>
>>>>I experimented a little bit with nullmove deepsearch extensions and with
>>>>singular extensions and now I have some questions:
>>>>
>>>>1. In all my tests the search tree (#nodes to finish n iterations) increased
>>>>   by a factor of 1.5 to 4. Although the test results were interesting I believe
>>>>   this is too much. What are your experiences?
>>>>
>>>>2. Is it correct to assume that singular extensions work best if there are
>>>>   only a few other extension heuristics?
>>>>
>>>>3. How would you describe the difference between nullmove deepsearch extensions
>>>>  (Donninger) and the nullmove threat detection by Deep Thought?
>>>>
>>>>4. Who uses
>>>>   - singular extensions
>>>no
>>>>   - nullmove deepsearch extensions
>>>no
>>>>   - nullmove threat detection
>>>
>>>Yes, if there is a mate threat I sometimes extend.
>>>
>>>>5. What is the best (easiest) way to implement singular extensions?
>>>
>>>Let's look at 3 cases of an SE implementation in a position where I
>>>have a score s for a move m, which is the best move, and I
>>>want to see whether it is singular.
>>>
>>>  a) your score s is <= alpha
>>>  b) your score s is > alpha and < beta
>>>  c) your score s is >= beta
>>>
>>>The easiest case is b).
>>>
>>>Let's look at that first, searching with a singular difference S:
>>>
>>>s is a true score, so you can search ALL OTHER MOVES with a window
>>>
>>>[alpha,beta] = [s-S,s]
>>>
>>>If all scores are <= s-S then your move is singular in case b.
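>>>
>>>In C, case b could look roughly like the sketch below. This is only a
>>>sketch under assumptions: search() is a fail-soft negamax routine, and
>>>MOVE, POSITION, MakeMove() and UnmakeMove() are assumed names, not code
>>>from any particular program.
>>>
>>>  typedef int MOVE;                   /* assumed: a move packed in an int    */
>>>  typedef struct POSITION POSITION;   /* assumed: the engine's position type */
>>>
>>>  extern int  search(POSITION *p, int depth, int alpha, int beta);
>>>  extern void MakeMove(POSITION *p, MOVE mv);
>>>  extern void UnmakeMove(POSITION *p, MOVE mv);
>>>
>>>  #define S_MARGIN 50   /* the singular difference S, e.g. half a pawn
>>>                           assuming centipawn evaluation                */
>>>
>>>  /* case b: s is a true score with alpha < s < beta.  Search every
>>>     OTHER move with the window [s-S, s]; if none of them gets above
>>>     s-S, the move m is singular and can be extended.                 */
>>>  int SingularCaseB(POSITION *p, MOVE m, MOVE *moves, int nmoves,
>>>                    int depth, int s)
>>>  {
>>>    int i, score;
>>>
>>>    for (i = 0; i < nmoves; i++) {
>>>      if (moves[i] == m) continue;          /* skip the best move itself */
>>>      MakeMove(p, moves[i]);
>>>      score = -search(p, depth - 1, -s, -(s - S_MARGIN));
>>>      UnmakeMove(p, moves[i]);
>>>      if (score > s - S_MARGIN)
>>>        return 0;                           /* a rival got close: not singular */
>>>    }
>>>    return 1;                               /* all others <= s-S: singular */
>>>  }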
>>>
>>>Now case a. We get back a score s. We know that for m to be singular, all
>>>moves n unequal to m must have a lower score.
>>>
>>>There is however one big problem: the score s is a bound. It is not
>>>a true score, so the true score could in fact be s-i for some i > 0.
>>>
>>>In practice, with nullmove and a q-search that doesn't search all moves with
>>>minimax but prunes at >= beta (which as far as I know all programs do),
>>>s is unlikely to be the true score of m.
>>>
>>>So we must first get the TRUE score of m before deciding whether it is
>>>singular. We then need to re-search m. Easiest is to re-search with
>>>[-infinite,s]
>>>
>>>If we get a fail high then we can assume that s is the true score.
>>>If we  don't get a fail high then the new score s' is the true score of
>>>move m.
>>>
>>>Now it will be obvious that re-searching with -infinite is rather silly.
>>>
>>>It is in fact doing a minimax search, as you remove your bound and
>>>re-search this move with the initial limits.
>>>
>>>So it is not very wise to do that at the same depth as you initially did.
>>>It is wiser to re-search using a reduction factor applied to the remaining
>>>depth you search with.
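>>>
>>>A rough sketch of that reduced-depth verification re-search for case a
>>>(same assumed types and routines as in the earlier sketch; R_SE and INF
>>>are assumed names for the re-search reduction and the infinity bound):
>>>
>>>  #define R_SE 2         /* reduction for the verification re-search */
>>>  #define INF  32000
>>>
>>>  extern int  search(POSITION *p, int depth, int alpha, int beta);
>>>  extern void MakeMove(POSITION *p, MOVE mv);
>>>  extern void UnmakeMove(POSITION *p, MOVE mv);
>>>
>>>  /* case a: s <= alpha, so s is only an upper bound.  Re-search m with
>>>     the window [-INF, s] at reduced depth.  A fail high means we keep s
>>>     as the true score; otherwise the returned value s' is used instead. */
>>>  int TrueScoreCaseA(POSITION *p, MOVE m, int depth, int s)
>>>  {
>>>    int score;
>>>
>>>    MakeMove(p, m);
>>>    score = -search(p, depth - 1 - R_SE, -s, INF);
>>>    UnmakeMove(p, m);
>>>
>>>    return (score >= s) ? s : score;   /* fail high -> assume s is true */
>>>  }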
>>>
>>>It is then obvious that the overhead of singular extensions depends
>>>upon your branching factor. The better (smaller) your branching factor,
>>>the bigger the relative overhead.
>>>
>>>This insight is easy to get:
>>>
>>>Suppose your branching factor is 5. Then if you reduce a depth d by
>>>2 ply, that re-search tree is more or less (1/5)*(1/5) = 1/25 of the tree at
>>>depth d.
>>>
>>>If your branching factor is say 2.5 to 3, then it is at least 1/9 (up to
>>>about 1/6.25) extra overhead, just for this move.
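>>>
>>>As a quick back-of-the-envelope check of those fractions (a toy
>>>calculation only, not engine code: with a roughly uniform branching
>>>factor b and a reduction of R ply, the re-search costs about b^-R of
>>>the full tree):
>>>
>>>  #include <stdio.h>
>>>  #include <math.h>
>>>
>>>  int main(void)
>>>  {
>>>    double R = 2.0;   /* re-search reduced by 2 ply */
>>>    double b;
>>>    for (b = 2.5; b <= 5.0; b += 0.5)
>>>      printf("b = %.1f -> re-search is about 1/%.1f of the full tree\n",
>>>             b, pow(b, R));
>>>    return 0;
>>>  }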
>>>
>>>Case c goes like case a, except that you need to search initially with
>>>[s,infinite] instead of [-infinite,s].
>>>
>>>Let's take an easy example of a position where humans see the problem
>>>immediately and programs have more trouble:
>>>
>>>r1b1k2r/p2n1ppp/1p2p3/3p2B1/P2P4/1Nn1P3/5PPP/R3KB1R w KQkq -
>>>
>>> r . b . k . . r   8
>>> p . . n . p p p   7
>>> . p . . p . . .   6
>>> . . . p . . B .   5
>>> P . . P . . . .   4
>>> . N n . P . . .   3
>>> . . . . . P P P   2
>>> R . . . K B . R   1
>>> a b c d e f g h
>>>position derived from an analysis by Eric de Haan and Vincent Diepeveen
>>>white to move
>>>
>>>Here I saw immediately that white could win the piece at c3 after f3, by
>>>putting pressure on c3 with Bd3, Kd2 and Rhc1.
>>>
>>>That's simply an idea. So when I start searching, I start with
>>>
>>>f3, Bb7, Bd3, Kd2 and I see that the piece gets captured after Rhc1. Now
>>>that is not a computer analysis.
>>>
>>>Whether the computer sees the problem depends upon its evaluation.
>>>Let's take the variation without nullmove, without hung-piece
>>>evaluation and without recapture extensions:
>>>      normal depth
>>>f3       1
>>>h6       2
>>>Bh4      3
>>>g5       4
>>>Bf2      5
>>>Bb7      6
>>>Bd3      7
>>>Rc8      8
>>>Kd2      9
>>>e5       10 *
>>>Rhc1     11
>>>exd4     12 *
>>>exd4     13
>>>something 14
>>>Rxc3      q-search
>>>
>>>Now a human sees that h6-g5 is not smart, and that g5 is forced
>>>after h6, although for a computer these can be interchanged with Rc8 too.
>>>
>>>Without a smart evaluation (so material considerations only), the singular
>>>extensions in THIS line are the moves marked with a star: e5 and exd4.
>>>
>>>The question is what happens after
>>>f3       1
>>>h6       2
>>>Bh4      3
>>>g5       4
>>>Bg3      5
>>>Bb7      6
>>>Bd3      7
>>>Rc8      8
>>>Kd2      9
>>>h5       10
>>>Rhc1     11
>>>something 12
>>>Rxc3      q-search
>>>
>>>Now we see that Bg3 is way better than Bf2, and therefore should
>>>objectively get a singular extension. However, we only see that it is better
>>>AFTER we have already found it to be winning.
>>>
>>>Now that we know how few plies we need to find this case of a hung piece,
>>>which is easy for humans, let's look for another way to delay
>>>this combination. Even if the g5 bishop gets via h4 to g3, we can still
>>>delay things with f6 and e5 and capturing at d4.
>>>
>>>So the line now becomes:
>>>f3       1
>>>f6       2
>>>Bh4      3
>>>g5       4
>>>Bg3      5
>>>Bb7      6
>>>Bd3      7
>>>Rc8      8
>>>Kd2      9
>>>e5       10
>>>Rhc1     11
>>>exd4     12
>>>exd4     13
>>>something 14
>>>Rxc3      q-search
>>>
>>>Now we clearly see that f6 is a clear case of a singular move here,
>>>as it is the only move that delays the loss.
>>>
>>>I think this is a very important problem to realize,
>>>but let's go on and look at the nullmove program on the short line:
>>>
>>>f3       1
>>>h6       2
>>>Bh4      3
>>>g5       4
>>>Bg3      5
>>>Bb7      6
>>>Bd3      7
>>>Rc8      8
>>>Kd2      9
>>>nullmove 10
>>>Rhc1     11+R
>>>something 12+R
>>>Rxc3     q-search
>>>
>>>Now that's *without* any extensions yet.
>>>
>>>Now let's take Cray Blitz. It does detect threats in the nullmove search,
>>>with R=1 for the nullmove reduction factor and R=2 for the singular extension
>>>reduction factor. Further it has recapture extensions.
>>>So it already sees the problem after Rhc1 because
>>>of the q-search nullmove. That is 12 ply, even without singular extensions.
>>>
>>>In this line however Bg3 and Bb7 should be singularly extended!
>>>So actually it should find it at 10 ply, if everything works normally.
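>>>
>>>Nullmove threat detection of this kind can be sketched roughly as below
>>>(again only a sketch with assumed names, not the actual Cray Blitz code;
>>>the idea is simply: give the opponent a free move, and if the reduced
>>>search then shows we get mated, there is a real threat, so extend
>>>instead of pruning):
>>>
>>>  #define R_NULL    1            /* nullmove reduction factor, R=1 here  */
>>>  #define MATE      31000
>>>  #define MATE_NEAR (MATE - 100) /* "mated within the search horizon"    */
>>>
>>>  extern int  search(POSITION *p, int depth, int alpha, int beta);
>>>  extern void MakeNullMove(POSITION *p);
>>>  extern void UnmakeNullMove(POSITION *p);
>>>
>>>  /* do the nullmove search; if even with a free move the opponent can
>>>     mate us quickly, flag a threat so the caller extends this node.   */
>>>  int NullMoveWithThreatDetect(POSITION *p, int depth, int beta, int *threat)
>>>  {
>>>    int score;
>>>
>>>    *threat = 0;
>>>    MakeNullMove(p);
>>>    score = -search(p, depth - 1 - R_NULL, -beta, -beta + 1);
>>>    UnmakeNullMove(p);
>>>
>>>    if (score <= -MATE_NEAR)
>>>      *threat = 1;               /* mate threat found: extend one ply     */
>>>
>>>    return score;                /* score >= beta gives the normal cutoff */
>>>  }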
>>
>>I ran this test and found two things.  First, from ply=2 on, CB likes f3
>>and would _never_ play anything else.  From plies 2-8 the score climbed
>>slowly but white is a pawn down so it was less than zero.  At ply=9 it got a
>>fail high, but no PV back so the score was -.117 at that point.  At ply=10 it
>>failed high again with an evaluation of +1.5.
>
>Is that a new version of Cray Blitz?
>

That is a good question.  I run at two places.  At our supercomputer center
here in Alabama, I have the most recent version, dated 1994, which was the
version where we added singular extensions before the Cape May ACM event that
year.  At Livermore, where I run on occasion as well, Harry has the older
version there with no SE.  I have no idea which I used when I tested, although
12 plies sounds like no SE and null-move R=1, which would match the '47*'
versions (we numbered major changes and appended a letter for 'tweaks'); 48*
had SE implemented.



>Some months ago when I messaged you this position and asked you to have
>a look at it with Cray Blitz to see how SE works, you messaged me back
>that it needed 12 ply to see that white was winning with f3.
>
>You used 1 processor for it.

Correct.  I suspect I ran at Livermore.  But without 'thinking' I probably
forgot about v47 vs. 48...



I should also run it on the SE version of Crafty being tested...


