Computer Chess Club Archives



Subject: Re: DIEP and crafty versus crafty without nullmove

Author: Robert Hyatt

Date: 11:36:29 09/04/02



On September 04, 2002 at 10:50:48, Jeremiah Penery wrote:

>On September 04, 2002 at 10:10:34, Vincent Diepeveen wrote:
>
>>On September 03, 2002 at 17:03:16, Jeremiah Penery wrote:
>>
>>>Perhaps you missed some of the threads from a while back (a year or so).
>>>Vincent has claimed to get >2.0 speedup on 2 processors before.  I'm not sure
>>>why suddenly he changes this to 1.6 or whatever now.  Seems to me he makes up
>>>whatever numbers he wants to 'prove' his points, because obviously whatever he
>>>says becomes a proof.
>>
>>There are 2 'deep' diep versions
>>  a) diep version 1999-2002 (juli)
>>  b) diep optimized for NUMA
>>
>>In both categories there is a substantial
>>difference in speedup between several versions:
>>  a1) with dangerous extensions
>>  a2) without dangerous extensions
>>  a3) without dangerous extensions and with forward pruning
>>
>>a3 always has a > 2.0 speedup for simplistic reasons that it is
>>doing a dubious search. It is this version which had also a 4.0 speedup
>>in 1999 and which searched 20 ply in endgame in 1999 at 4 processors.
>
>So you're saying your serial search for that version sucks?
>
>>After that i have again experimented with forward pruning bigtime over
>>the years, but the version a2 is very close to a 2.0 speedup. If you
>>look to node counts of crafty you will see it also needs less nodes
>>than a single cpu search.
>
>I've never seen such a result.  Every time, more processors requires some more
>nodes to complete the same search.
>

Please ignore him.  I explained to Vincent that displaying accurate node counts in the
middle of a parallel search is not easy to do.  He did it anyway, by printing a
single instance of "tree->nodes_searched".  Unfortunately, that value doesn't include
all the nodes that have been searched; the rest are scattered around in _other_
tree->nodes_searched values.  And no, you can't just add them all up, because
some may already have been added to the root count while others have not.
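The scattered-counter bookkeeping can be sketched roughly like this.  This is a minimal illustration under an assumed Crafty-like scheme; the names (`Tree`, `root_nodes`, `backup_nodes`) are hypothetical, not Crafty's actual code:

```c
/* Hypothetical sketch, not Crafty's actual code.  Each split block
 * keeps a local node counter that is folded into the running total
 * only when that block finishes, so reading any single counter
 * mid-search understates the true count. */

typedef struct {
    long nodes_searched;    /* nodes counted locally, not yet folded up */
} Tree;

static long root_nodes;     /* running total, updated only at backup time */

/* Fold a finished split block's local count into the total. */
static void backup_nodes(Tree *t)
{
    root_nodes += t->nodes_searched;
    t->nodes_searched = 0;  /* zero it so it is never counted twice */
}
```

Under this scheme, a count printed mid-search is effectively a stale `root_nodes`: it stays flat while blocks are still searching and only jumps once `backup_nodes()` runs for each finished block.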

So Vincent blindly prints out a number and says "aha, your node count for
two processors is lower than the node count for one, which is impossible
except for rare super-linear cases."  And he has "proofed" (sic) his point.

I did point out that the number he printed was always low, and told him
to run Crafty with mt=2 in console mode and occasionally type a ".".  For
a good while the node count won't change, even though you _know_ it is
searching.  Then, suddenly, the counter will jump significantly as the
values get backed up and the local node counts get collected and added.
He is basing his judgement on a value that has not been updated recently.
I told him how to see this in the code, but do you think he is going to let
a logical explanation get in the way of his powers of "deduction"?  Not likely.  :)

So ignore the above rambling by him.  It is like millions of monkeys in a
room typing: occasionally something that "almost" makes sense pops out, but
with no basis in fact.

>>Bob denies it, but never shows node counts. What Bob needs to do is both
>>proof 1.7 speedup and at the same time show the node counts for every ply
>>at every depth while claiming that 1.7 speedup.

I "proofed" (sic) the 1.7 speedup multiple times here.  All that is needed is to
show the 1-cpu time and the 2-cpu time, and divide the latter into the
former.  No need for node counts at all.  That is a red herring...
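The arithmetic here is nothing more than a division.  A trivial sketch, with an illustrative helper name and made-up sample times rather than measured data:

```c
/* Speedup is wall-clock time on 1 cpu divided by wall-clock time on
 * 2 cpus -- no node counts involved.  Name and numbers illustrative. */
static double speedup(double time_1cpu, double time_2cpu)
{
    return time_1cpu / time_2cpu;
}

/* e.g. 170 seconds on 1 cpu vs 100 seconds on 2 cpus -> speedup 1.7 */
```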

>
>Why do the node counts for each ply matter?  Is not the final node count enough?
> Just search some positions to 15 ply or so with 1 and 2 processors, and compare
>total node counts.

This is accounted for in the speedup, of course...  If the 2cpu test searches
zero extra nodes, then the speedup will be 2.0 exactly.  If it searches more
nodes, then the speedup drops off.  I think everyone here understands that,
except for you apparently...
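The point about extra nodes can be made explicit.  Under the simplifying assumption that every cpu searches nodes at the same rate, the speedup is the cpu count scaled down by whatever extra nodes the parallel search does; the function name and figures below are illustrative only:

```c
/* Assuming equal nodes-per-second on every cpu, extra nodes searched
 * by the parallel version reduce the speedup proportionally. */
static double parallel_speedup(int ncpus, long serial_nodes, long parallel_nodes)
{
    return (double)ncpus * (double)serial_nodes / (double)parallel_nodes;
}

/* 2 cpus, zero extra nodes:  2 * 100/100 = 2.0
 * 2 cpus, 30% extra nodes:   2 * 100/130 ~ 1.54 */
```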


>
>>Bob did do this for Cray Blitz obviously. However Cray Blitz used hardly
>>nullmove, so the chance that a branch gives a major cutoff in little nodes
>>is a lot less likely, whereas bob himself sees for crafty today already
>>a 2 ply increase using nullmove (i feel it is on average a LOT more
>>than 2 ply though for DIEP, because if i search with all extensions
>>turned on fullwidth i hardly get above 9 ply and only after billions of
>>nodes i get to 10 ply; only without dangerous extensions fullwidth a
>>ply or 10 is possible).
>
>What, again, does this have to do with parallel speedup?  Can you stick to the
>subject?

NO.  :)





Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.