Computer Chess Club Archives


Subject: Re: Q&A with Feng-Hsiung Hsu

Author: Jeremiah Penery

Date: 06:40:21 10/16/02

On October 16, 2002 at 07:31:01, Vincent Diepeveen wrote:

>On October 15, 2002 at 13:31:07, James Swafford wrote:
>
>There are two things:
>
>  a) the theoretical branching factor without nullmove
>
>That is true for the last so-many plies of Deep Blue's search, and it
>is the theoretical truth.
>
>  b) the branching factor with hashtables.
>
>You will see that Deep Blue, like all software programs, sits somewhere
>in the middle between these two.
>
>126 million nodes a second. And it got about 12 plies in a bit more
>than 3 minutes.
>
>that gives a total branching factor of
>(126 million * 180 seconds)^(1/12) = 6.0

If you mean (180*126m)^(1/12), that equals 7.294.
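
To check the arithmetic yourself, here is a quick Python sketch (the node
rate, time, and depth are just the figures quoted above):

  # Invert total_nodes = bf ** depth to get the implied branching factor.
  nodes_per_sec = 126e6      # quoted average node rate
  seconds = 180              # "a bit more than 3 minutes"
  depth = 12                 # plies reached
  total_nodes = nodes_per_sec * seconds
  print(total_nodes ** (1.0 / depth))   # -> 7.294..., not 6.0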

>So the real overall branching factor that was achieved was 6.0
>
>that's lower than the theoretic branching factor of sqrt(40^18)

If a decent program didn't achieve a lower number than that 'theoretic
branching factor', I'd be very surprised.

BUT, if you insist on this number (sqrt(40) is a branching factor of 6.32),
and you think they only got 12 plies, they would need to search
6.32^12 = 40^6 = 4,096,000,000 nodes for those 12 plies.  They can
accomplish that in 32.5 seconds at 126M nodes/sec, and 12.2 plies takes 47
seconds.  In 180 seconds they can in fact do 12.93 plies.  With a branching
factor of 6, they could reach 13.3 plies in 180 seconds based on your
formula.
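
All of those times and depths fall out of the same formula; here is a short
Python sketch, assuming the 126M nodes/sec rate quoted above:

  import math

  rate = 126e6          # nodes per second
  bf = 40 ** 0.5        # sqrt(40) ~ 6.32, the 'theoretic' factor

  # Time in seconds to search d plies at branching factor bf.
  def search_time(bf, d, rate):
      return bf ** d / rate

  print(search_time(bf, 12, rate))     # ~32.5 s (6.32^12 = 40^6 nodes)
  print(search_time(bf, 12.2, rate))   # ~47 s

  # Depth reachable in t seconds: solve bf ** d = rate * t for d.
  def depth_reached(bf, t, rate):
      return math.log(rate * t) / math.log(bf)

  print(depth_reached(bf, 180, rate))    # ~12.93 plies
  print(depth_reached(6.0, 180, rate))   # ~13.3 plies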

Are you going to claim now that their branching factor was above 7 just so
it fits your formula, when you can calculate it directly from the logfiles
and find that it was even less than 6?

>The reality is that Deep Blue's branching factor looks worse than it is
>because of two things:
>  a) the incredible overhead from the hardware
>  b) the huge inefficiency of the parallel search; an estimate of 10%
>     out of the total of 126 million nodes a second that it averaged
>     still is not very realistic, IMHO.

You're completely misinterpreting their data.  They give an estimate of
around 10% total efficiency (8%/12% in tactical/non-tactical positions) on
the 480-processor machine.  This means they were getting an average speedup
of 48 over a single-processor machine.  We can tell because they compare to
numbers on
the 24-processor machine:  "For positions with many deep forcing sequences
speedups averaged about 7, for an observed efficiency of about 30%.  For quieter
positions, speedups averaged 18, for an observed efficiency of 75%."
As you can see, 7/24 = 29.2% and 18/24 = 75%.  With a total average speedup
of about 48 on the big machine, you can calculate that their effective node
rate (the speed of an equivalent serial search) would have been a little
over 100M (2.2M x 48).  Amazingly, this is about the same as the
theoretical peak (1000M) times that same 10% efficiency.
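
The arithmetic, as a Python sketch (note the 2.2M single-processor rate is
an inference from their numbers, not a figure they state directly):

  # Observed efficiency = speedup / processor count.
  print(7 / 24.0, 18 / 24.0)    # 0.292 and 0.75, matching the paper

  # Scale ~10% efficiency up to the 480-processor machine.
  speedup = 0.10 * 480          # ~48
  single_proc_rate = 2.2e6      # nodes/sec on one processor (inferred)
  print(single_proc_rate * speedup)   # ~105.6M n/s, "a little over 100M"

  # Same answer from the other direction:
  print(1000e6 * 0.10)          # peak 1000M n/s at 10% efficiency = 100M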

In case you want to recalculate the branching factor based on 100M instead
of 126M nodes/sec: searching 6.93^12.2 nodes at 100M nodes/sec takes 180.69
seconds, and 6.93 is still far too high a branching factor.  Using 6, which
is still too high, you get a little over 13 plies at that node rate in 3
minutes.
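
Rerunning the same check with the ~100M effective rate (a sketch, using the
numbers derived above):

  import math

  rate = 100e6   # effective nodes/sec derived above

  # Time for 12.2 plies at the branching factor your formula implies:
  print(6.93 ** 12.2 / rate)                    # ~180.7 seconds

  # Depth reachable in 180 s at a branching factor of 6:
  print(math.log(rate * 180) / math.log(6.0))   # ~13.2 plies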


