Computer Chess Club Archives


Subject: Re: singular extension

Author: Tony Werten

Date: 02:01:32 09/18/04


On September 18, 2004 at 03:56:48, Uri Blass wrote:

>On September 18, 2004 at 03:22:25, Tony Werten wrote:
>
>>On September 17, 2004 at 11:50:41, Robert Hyatt wrote:
>>
>>>On September 17, 2004 at 08:15:41, Tony Werten wrote:
>>>
>>>>On September 16, 2004 at 11:21:32, Robert Hyatt wrote:
>>>>
>>>>>On September 16, 2004 at 10:06:46, martin fierz wrote:
>>>>>
>>>>>>On September 16, 2004 at 09:53:14, Robert Hyatt wrote:
>>>>>>
>>>>>>[snip]
>>>>>>
>>>>>>ok,ok, i believe you. i just never saw anybody here saying it worked for them,
>>>>>>but i distinctly remembered people saying it didn't work for them.
>>>>>
>>>>>Correct both ways for me.  I reported more than once that it looked
>>>>>significantly better in Cray Blitz, but that tests with Crafty never produced
>>>>>results that looked better than Crafty without SE.  I don't know whether the
>>>>>null-move R=3 stuff hurt the SE detection code or not, although I did speculate
>>>>>that it was possible since CB used null-move R=1, non-recursive, rather than the
>>>>>aggressive way we do it today...
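
For anyone following along, the contrast being described works out to roughly the
sketch below. The R values come from the post; search(), the null-move helpers and
the rest of the skeleton are only illustrative names, not Cray Blitz or Crafty code.

typedef struct Position Position;      /* opaque position type              */

extern int  quiesce(Position *pos, int alpha, int beta);
extern int  in_check(const Position *pos);
extern void make_null(Position *pos);
extern void unmake_null(Position *pos);

int search(Position *pos, int depth, int alpha, int beta, int null_ok)
{
    if (depth <= 0)
        return quiesce(pos, alpha, beta);

    /* Null-move pruning: give the opponent a free move; if a reduced
       search still fails high, cut off here.                              */
    if (null_ok && !in_check(pos)) {
        const int R = 3;               /* modern, aggressive reduction      */
        make_null(pos);
        /* Passing 1 below keeps null moves allowed inside the reduced
           search (recursive).  The old Cray Blitz style would use R = 1
           and pass 0 instead (non-recursive).                              */
        int score = -search(pos, depth - 1 - R, -beta, -beta + 1, 1);
        unmake_null(pos);
        if (score >= beta)
            return beta;
    }

    /* ... normal move loop goes here ... */
    return alpha;
}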
>>>>>
>>>>>
>>>>>>
>>>>>>>Bruce has reported _lots_ of test data here in CCC.  Including ECM results with
>>>>>>>and without, etc...
>>>>>>
>>>>>>but... do you really believe a tactical test set like ECM is the right way to
>>>>>>test SE? and what about the question pham already posted:
>>>>>
>>>>>That wasn't the only tests.  Bruce mentioned several times that he noticed that
>>>>>with SE, the program played a bit more "steadily" in tactical positions, and
>>>>>that against programs without SE, he would usually be going along when "BAM"
>>>>>(his words) SE would find a deep tactic and end the game...
>>>>>
>>>>>He once said "this is a stupendous extension" although I believe that later he
>>>>>became "less than stupendous" when looking at real games rather than tactical
>>>>>test positions.  But he did use real game data, and went so far as to play lots
>>>>>of Crafty vs Ferret games where I turned book learning off and set the width to
>>>>>1 so he could play crafty the same opening with and without SE...
>>>>>
>>>>>
>>>>>>
>>>>>>in http://www.brucemo.com/compchess/programming/extensions.htm#singular
>>>>>>bruce wrote the stuff below in 2001 - not very enthusiastic about SE if you ask
>>>>>>me! i probably based my anti-SE-bias in part on this without remembering where i
>>>>>>had it from, i read bruce's pages a long time ago.
>>>>>>
>>>>>>cheers
>>>>>>  martin
>>>>>>
>>>>>>
>>>>>>"Singular extension
>>>>>>This extension is the search heuristic centerpiece of Deep Thought, the
>>>>>>strongest computer chess player of the 1980's, and precursor to Deep Blue.
>>>>>>
>>>>>>The idea is that if one move is significantly better than all of the other moves
>>>>>>(a singular move), it should be extended.
>>>>>>
>>>>>>This can be interpreted as a more general case of the recapture and single
>>>>>>response extensions.  It encompasses these, but also can be used in cases where
>>>>>>the singular move is not a recapture and where the side making the move isn't in
>>>>>>check.
>>>>>>
>>>>>>I don't know why it worked in DT, but it seems to me that this is a loss-seeking
>>>>>>extension.
>>>>>
>>>>>The question is, did he write that before or after _he_ chose to implement the
>>>>>"cheapo version" and then actually keep it in his program because it seemed to
>>>>
>>>>I'm not so sure the version Bruce is using is much different from the Deep Blue
>>>>version.
>>>>
>>>>In one of their papers they describe the parameters they sent to the hardware
>>>>search. One of them is "depth offset for singularity tests".
>>>>
>>>>That does sound like what Bruce described (unless I'm missing something). It also
>>>>seems to indicate that DB didn't use them in the last x plies (where x would be
>>>>6 in a few-minutes search), which makes sense from my own testing, where testing
>>>>too close to the horizon would blow up the search.
>>>
>>>It really is way different.  IE for PV-singular their test is much stronger than
>>>what Bruce and I were doing.  We did a very shallow search at the start of any
>>>new node, to see if one of the first moves tried would fail high.  If so, we
>>>searched the rest of the moves with an offset (lower) window to see if any of
>>>them would fail high.  This is weaker than the DB PV-singular test, similar to
>>>the DB FH-singular test. But then there are issues like the sticky transposition
>>>table, and all the work they did to exclude obvious singular moves that don't
>>>deserve extensions in the "trivial" part of the search.  IE if I play BxN,
>>>re-capturing is pointless to extend, even though it is the only good move to
>>>play, in many circumstances.
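
The 200-line flavour being described comes down to something like the sketch
below: a reduced-depth search on the candidate move, then a reduced-depth pass
over the remaining moves against a lowered (offset) window. The names, the
margin and the depth limits are made-up illustrations, not Bruce's or Bob's
actual code.

typedef struct Position Position;      /* opaque position type              */
typedef int Move;

#define SINGULAR_MARGIN     100        /* offset for the lowered window     */
#define SINGULAR_REDUCE       2        /* how much shallower the test is    */
#define SINGULAR_MIN_DEPTH    6        /* skip the test near the horizon    */

extern int  search(Position *pos, int depth, int alpha, int beta);
extern void make_move(Position *pos, Move m);
extern void unmake_move(Position *pos, Move m);

/* Returns 1 if moves[candidate] looks singular at this node: it fails high
   in a reduced-depth search while every other move fails low even against
   the lowered window beta - SINGULAR_MARGIN.  The caller would then extend
   that one move.                                                           */
static int is_singular(Position *pos, Move *moves, int nmoves,
                       int candidate, int depth, int beta)
{
    int i, score;

    if (depth < SINGULAR_MIN_DEPTH)    /* testing too close to the horizon  */
        return 0;                      /* tends to blow up the search       */

    make_move(pos, moves[candidate]);
    score = -search(pos, depth - SINGULAR_REDUCE - 1, -beta, -beta + 1);
    unmake_move(pos, moves[candidate]);
    if (score < beta)
        return 0;                      /* the candidate itself fails low    */

    for (i = 0; i < nmoves; i++) {
        if (i == candidate)
            continue;
        make_move(pos, moves[i]);
        score = -search(pos, depth - SINGULAR_REDUCE - 1,
                        -(beta - SINGULAR_MARGIN),
                        -(beta - SINGULAR_MARGIN) + 1);
        unmake_move(pos, moves[i]);
        if (score >= beta - SINGULAR_MARGIN)
            return 0;                  /* an alternative also holds         */
    }
    return 1;                          /* only one move stands out          */
}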
>>>
>>>I implemented the full SE approach in Cray Blitz.  It took a _long_ time to get
>>>it right.  In fact, I reported in 1993 at the ACM event that I had a serious SE
>>>bug that could over-extend and run out the end of the search arrays.  The code
>>>that Bruce/I were using was very simple to write compared to the SE
>>>implementation I did for CB, following their paper very carefully.
>>>
>>>>
>>>>I used to have the same version of SE. It didn't cost much, but didn't gain much
>>>>either, except for a very few times. I always left it in, hoping for it to make
>>>>a difference in an important game and at least once it did.
>>>>
>>>
>>>
>>>
>>>"same version" == what?  IE for the DB approach it cost me about 2 plies of
>>>overall depth.  They reported something similar.  The version Bruce and I were
>>>playing with cost us about the same thing: 2 plies lopped off the search to see
>>>deeper tactics.  It didn't pay off very well for me, but seemed to be a little
>>>better than break-even for Bruce...
>>>
>>>The version Bruce/I were using added maybe 200 lines of code total.  The SE code
>>>in Cray Blitz was closer to 2500 lines of code total.
>>
>>I'm definitely talking about a 200-line version, with special care for PV
>>nodes, since they tend to blow up the search. The dual-credit system from DB
>>seems to take care of that quite nicely, reducing the extensions by an average
>>of 30%.
>>
>>My offset searches are not really shallow, just depth minus 2 ply. That seems
>>costly, but it is necessary for a certain addition. Only with this addition did
>>it seem to slightly more than break even for me.
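
The sketch below is only a guess at the general credit idea, not DB's actual
dual-credit mechanism from their papers: extensions earn fractional credit,
only whole plies of accumulated credit are actually spent, and the carry-over
is capped, so the nominal extensions shrink on average. Every constant and
name in it is invented for the example.

#define ONE_PLY          4             /* depth kept in quarter-ply units   */
#define SINGULAR_CREDIT  3             /* 3/4 of a ply per singular move    */
#define MAX_CREDIT      (2 * ONE_PLY)  /* never carry more than 2 plies     */

typedef struct {
    int credit;                        /* earned but not yet spent credit   */
} ExtState;

/* Add 'earned' credit, then convert whatever whole plies are available
   into an actual extension; the remainder is carried, up to the cap.       */
static int take_extension(ExtState *st, int earned)
{
    int plies;

    st->credit += earned;
    if (st->credit > MAX_CREDIT)
        st->credit = MAX_CREDIT;

    plies = st->credit / ONE_PLY;      /* whole plies to add to the depth   */
    st->credit -= plies * ONE_PLY;
    return plies;
}

With these numbers, two singular moves in a row (earned = SINGULAR_CREDIT each)
buy one real ply plus some leftover credit rather than two full plies, which is
the kind of averaged-down extension described above.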
>
>I wonder if you have a stable version at 40/40 time control.
>Xinix of 21.08.2004 was taken out of WBEC because of crashing again and again.

Nope. I was developing on 2 computers: one Win98 with Delphi 5, one WinXP with
Delphi 7. Somewhere it went seriously wrong.

I couldn't be bothered searching for a correct version (things already went
wrong before the crashing) and redoing all my work, since I thought it was time
for a rewrite anyway.

Tony

>
>Uri


