Computer Chess Club Archives



Subject: Re: next deep blue

Author: Robert Hyatt

Date: 07:16:51 01/26/00


On January 26, 2000 at 01:15:54, Christophe Theron wrote:

>On January 26, 2000 at 00:36:26, Robert Hyatt wrote:
>
>>On January 25, 2000 at 21:38:51, Christophe Theron wrote:
>>
>>>On January 25, 2000 at 18:50:25, Ernst A. Heinz wrote:
>>>
>>>>On January 25, 2000 at 17:34:41, Robert Hyatt wrote:
>>>>>
>>>>> [...]
>>>>>
>>>>>>As far as I know, Hsu was always convinced that MVV/LVA with
>>>>>>futility pruning in the quiescence search was the right way
>>>>>>to go -- he already wrote about it in his Ph.D. thesis. Even
>>>>>>"ChipTest" did it exactly like this I suppose.
>>>>>>
>>>>>
>>>>>So far as I know, he didn't do futility pruning in the q-search.  When we had
>>>>>the original discussion in r.g.c, I mentioned that SEE was making my code in
>>>>>Crafty (and Cray Blitz since I did it the same way there) over 50% faster than
>>>>>pure MVV/LVA ordering.  He asked for details.  At that point, I realized that
>>>>>there were two distinct issues: (1) ordering moves with SEE vs MVV/LVA.  (2) I
>>>>>was doing a type of futility pruning (tossing out captures that were hopeless.)
>>>>>
>>>>>I then reworded my note and tried tests.  I first found that SEE was about 10%
>>>>>better than MVV/LVA, looking at the tree size.  And since SEE was pretty cheap
>>>>>in bitboards, overall it was faster as well.  I then found that tossing bum
>>>>>captures was a 50% gain.  He thought that would cause problems.  We had a long
>>>>>discussion, with several test positions.
>>>>>
>>>>>I can certainly be wrong about whether he used it or not, but he certainly
>>>>>said that he was looking at _all_ captures at the time of the discussion which
>>>>>was somewhere around 1993-1994...
>>>>>
>>>>>So his doing some sort of futility tossing was a surprise to me...  I didn't
>>>>>notice this in his thesis as I was more interested in the parallel search
>>>>>stuff.
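
[Editorial note: a minimal sketch of the two ideas being contrasted above -- a
quiescence search that orders captures by SEE rather than MVV/LVA, and that
"tosses" hopeless captures, both by a futility-style test against alpha and by
skipping captures that SEE scores as losing material.  All types, helper
functions and margin values (Position, Move, see(), victim_value(), etc.) are
illustrative assumptions, not Crafty's, Cray Blitz's, or Deep Blue's actual code.]

/* Sketch: quiescence search with SEE ordering and pruning of "bum" captures.
   The engine primitives declared extern are assumed to exist elsewhere.     */

#define MAX_MOVES 256
#define QS_FUTILITY_MARGIN 100                      /* illustrative margin   */

typedef struct Position Position;
typedef int Move;

extern int  evaluate(const Position *pos);           /* static evaluation     */
extern int  generate_captures(Position *pos, Move *m);   /* returns count     */
extern int  see(const Position *pos, Move m);        /* static exchange eval  */
extern int  victim_value(const Position *pos, Move m);   /* captured piece    */
extern void sort_by_see(Position *pos, Move *m, int n);  /* best SEE first    */
extern void make_move(Position *pos, Move m);
extern void unmake_move(Position *pos, Move m);

int quiesce(Position *pos, int alpha, int beta)
{
    int stand_pat = evaluate(pos);          /* "stand pat" score as a floor   */
    if (stand_pat >= beta)
        return stand_pat;
    if (stand_pat > alpha)
        alpha = stand_pat;

    Move moves[MAX_MOVES];
    int n = generate_captures(pos, moves);

    /* Order by SEE value; MVV/LVA would instead sort by victim value,
       breaking ties with the attacker value.                                 */
    sort_by_see(pos, moves, n);

    for (int i = 0; i < n; i++) {
        /* Futility-style toss: even winning the victim outright cannot
           bring the score up to alpha, so the capture is hopeless.           */
        if (stand_pat + victim_value(pos, moves[i]) + QS_FUTILITY_MARGIN <= alpha)
            continue;

        /* SEE-based toss: the capture loses material outright.               */
        if (see(pos, moves[i]) < 0)
            continue;

        make_move(pos, moves[i]);
        int score = -quiesce(pos, -beta, -alpha);
        unmake_move(pos, moves[i]);

        if (score >= beta)
            return score;
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}
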
>>>>
>>>>I just reread the passages that I referred to in his Ph.D. thesis. They
>>>>are on pages 18 and 19. There he definitely mentions and explains the
>>>>mechanism, possibility, and power of futility cutoffs in the quiescence
>>>>search.
>>>>
>>>>However, he does not explicitly state whether he actually used them. So
>>>>this was only my intuitive assumption. But he might have introduced them
>>>>only later, as you say -- the thesis text does not really clarify the
>>>>issue of when and how exactly the futility cutoffs were introduced in
>>>>his hardware designs.
>>>>
>>>>>Not even the branching factor?
>>>>
>>>>As Christophe already remarked, the effective branching factor visible
>>>>from the log times that you posted is *** REALLY *** surprising.
>>>>
>>>>Some possible explanations that I deem reasonable follow below.
>>>>
>>>>  1.  The search of DB-2 actually performed some slight forward
>>>>      pruning (e.g. normal futility pruning at frontier nodes with
>>>>      a conservative cutoff margin).
>>>>
>>>>  2.  Enhanced transposition-table cutoffs worked really well for
>>>>      the search of DB-2 in this respect.
>>>>
>>>>  3.  The logs on the WWW pages of IBM contain garbage.
>>>>
>>>>There are surely more such explanations -- please add them to the
>>>>list such that we can try to figure out what is going on here!
>>>>
>>>>=Ernst=
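
[Editorial note: for explanation 1 above, a rough sketch of what "normal
futility pruning at frontier nodes with a conservative cutoff margin" could
look like in a plain alpha-beta searcher.  The margin value and helper names
are illustrative assumptions (types and primitives as declared in the
quiescence sketch earlier), not anything taken from the DB-2 hardware.]

#define FUTILITY_MARGIN 300          /* conservative: roughly a minor piece  */

extern int quiesce(Position *pos, int alpha, int beta);
extern int generate_moves(Position *pos, Move *m);       /* returns count    */
extern int is_capture(const Position *pos, Move m);
extern int gives_check(const Position *pos, Move m);

int search(Position *pos, int alpha, int beta, int depth)
{
    if (depth == 0)
        return quiesce(pos, alpha, beta);

    Move moves[MAX_MOVES];
    int n = generate_moves(pos, moves);
    int static_eval = evaluate(pos);

    for (int i = 0; i < n; i++) {
        /* Frontier node (depth == 1): a quiet move that cannot raise the
           static evaluation above alpha, even with a generous margin, is
           pruned without being searched.                                     */
        if (depth == 1 &&
            !is_capture(pos, moves[i]) && !gives_check(pos, moves[i]) &&
            static_eval + FUTILITY_MARGIN <= alpha)
            continue;

        make_move(pos, moves[i]);
        int score = -search(pos, -beta, -alpha, depth - 1);
        unmake_move(pos, moves[i]);

        if (score >= beta)
            return score;
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}
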
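[Editorial note: explanation 2 refers to enhanced transposition-table cutoffs
(ETC): before searching the moves at an interior node, probe the hash table
for each successor position and cut off immediately if any stored bound is
deep enough to prove a fail-high here.  A hedged sketch follows; the TTEntry
layout and tt_probe() are assumptions for illustration only.]

typedef struct {
    int depth;           /* draft of the stored search                       */
    int score;           /* score from the stored position's point of view   */
    int is_lower_bound;  /* 0 means upper bound or exact                     */
} TTEntry;

extern int tt_probe(const Position *pos, TTEntry *entry);   /* 1 if found    */

/* Returns 1 if some successor's table entry already proves score >= beta.   */
int etc_cutoff(Position *pos, Move *moves, int n, int beta, int depth)
{
    for (int i = 0; i < n; i++) {
        make_move(pos, moves[i]);
        TTEntry e;
        int hit = tt_probe(pos, &e);
        unmake_move(pos, moves[i]);

        /* The child's stored score is from the opponent's viewpoint; an
           upper bound (or exact score) of -beta or worse for him means this
           node fails high without searching anything.                       */
        if (hit && e.depth >= depth - 1 && !e.is_lower_bound &&
            -e.score >= beta)
            return 1;
    }
    return 0;
}
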
>>>
>>>
>>>Futility or extended futility pruning does not change the branching factor of
>>>deep searches. I'm surprised that you and Bob keep focusing on this. It has
>>>absolutely no importance.
>>>
>>>The QSearch is likewise irrelevant when we talk about BF.
>>>
>>>
>>>I add:
>>>
>>>4. The sample of times to reach a given depth that Bob pulled from the logs is
>>>not representative.
>>>
>>>I think somebody should seriously take the time to extract more BF data from the
>>>logs.
>>>
>>>This discussion is really interesting.
>>>
>>>
>>>    Christophe
>>
>>
>>My sample was random but not scientific.  I took the first log and just picked
>>a few positions where they did searches from 6 to 11 plies or so.  None
>>were consecutive, I don't think, although you can obviously find my time values
>>in their log to see what I chose...  I made no attempt to pick optimum values,
>>just purely random ones, because I wasn't sure what I was seeing.
>
>
>I did not mean to imply that you had chosen good-looking ones on purpose.
>
>I hope somebody takes the time to enter this data into a spreadsheet and gives
>us some numbers...
>
>
>
>
>>It _is_ possible they were doing something they haven't revealed.  Yet.  I
>>intend to study their games as much as I can find time for...
>
>
>Yes, I think you cannot have such a good branching factor with just alpha-beta
>and expensive singular extensions (SE).
>
>Non-recursive null move wouldn't provide this either. It must be some recursive
>pruning scheme.
>
>IF the BF turns out to be really so good.
>
>
>
>    Christophe
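
[Editorial note: a sketch of the distinction Christophe draws above between
recursive and non-recursive null-move pruning.  NULL_MOVE_R and the null-move
primitives are illustrative assumptions; the other helpers are as declared in
the earlier sketches.  This is not a claim about what DB-2 actually did.]

#define NULL_MOVE_R 2                     /* illustrative depth reduction    */

extern void make_null_move(Position *pos);
extern void unmake_null_move(Position *pos);
extern int  in_check(const Position *pos);

int search_nm(Position *pos, int alpha, int beta, int depth, int can_null)
{
    if (depth <= 0)
        return quiesce(pos, alpha, beta);

    /* Null move: hand the opponent a free move; if a reduced-depth search
       still fails high, assume the full-depth search would too.             */
    if (can_null && !in_check(pos)) {
        make_null_move(pos);
        int score = -search_nm(pos, -beta, -beta + 1,
                               depth - 1 - NULL_MOVE_R, 0);
        unmake_null_move(pos);
        if (score >= beta)
            return score;
    }

    Move moves[MAX_MOVES];
    int n = generate_moves(pos, moves);
    for (int i = 0; i < n; i++) {
        make_move(pos, moves[i]);
        /* Passing can_null = 1 here, even inside a null-move subtree, is
           what makes the scheme recursive: null moves may be tried again
           deeper in the tree.  A non-recursive version keeps them disabled
           everywhere below the first null move.                             */
        int score = -search_nm(pos, -beta, -alpha, depth - 1, 1);
        unmake_move(pos, moves[i]);

        if (score >= beta)
            return score;
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}
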


Also note that the times I published were the "end-of-iteration" times, as best
I could tell.  They need to be 'netted' to get the real branching factor, i.e.,
subtract the cumulative time through iteration 6 from the cumulative time
through iteration 7 to get the time spent on iteration 7 alone, and so on...

I haven't looked at how that would affect the results, if it does...
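
[Editorial note: a small self-contained illustration of that "netting" step.
The times below are made-up placeholders, not values from the IBM logs; the
per-iteration ratio of netted times is what approximates the effective
branching factor.]

#include <stdio.h>

int main(void)
{
    /* cumulative[i] = total elapsed seconds when iteration (6 + i) ends     */
    double cumulative[] = { 10.0, 35.0, 120.0, 400.0 };    /* depths 6..9    */
    int n = (int)(sizeof cumulative / sizeof cumulative[0]);
    double netted[16];

    netted[0] = cumulative[0];               /* depth-6 time taken as-is     */
    for (int i = 1; i < n; i++)
        netted[i] = cumulative[i] - cumulative[i - 1];

    for (int i = 1; i < n; i++)
        printf("depth %d: %.1fs for the iteration, ratio %.2f\n",
               6 + i, netted[i], netted[i] / netted[i - 1]);
    return 0;
}
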


