Computer Chess Club Archives



Subject: A plausible explanation (?)

Author: Jeremiah Penery

Date: 17:42:44 01/31/02



On January 30, 2002 at 09:43:49, Vincent Diepeveen wrote:

>a) 12(6) doesn't mean they searched 18 ply. It means they searched
>   12 plies fullwidth from which 6 ply in hardware

AFAIK, they couldn't get a PV from the hardware.  How, then, do you explain
searches that begin like this:

Depth		Score		Time		PV
 5(5)[Bg5](-9) -9  T=0
Be7g5 qc1d2 Pa5a4 qd2c3 Bg5e3n qc3e3B

By your reading, they're doing a 5-ply search, and their software isn't
searching at all!  What do you think their software was doing?
Amazingly enough, they have a 6-ply PV to go with this "hardware only" search,
even though they can't get a PV from their hardware (and it is not the same PV
that came from the last iteration of the previous move).

>Let's assume Bob as only defender from IBM is right,
>let's do some math on the MINIMUM number of nodes
>they would need for a 18 ply search.

I don't think they did an 18-ply nominal search.  I think they did 12 in
software, like Bob says, and the chess hardware was used more as an extra degree
of evaluation (which of course did a small search of its own) for the
"interesting" terminal nodes of the software search.  For the PV and other
"interesting" lines, I think it would be fair to say that the machine did search
18 plies.  But I don't think you can say "it searched 18 plies" with no
qualification.

>For a 18 ply fullwidth search: squareroot (40^18) = 40^9
> = 262,144,000,000,000

Your math here is fine - I just don't think your assumptions about the machine
are correct.  First of all, we can see from their logs that their effective
branching factor was not sqrt(40); sqrt(20) would be a much closer
approximation.  That would make the total number of nodes 512,000,000,000, or
2560 seconds at 200M NPS - within about a factor of 10 now.  If you take
sqrt(16) (which agrees even more closely with their logs), you get
68,719,476,736 nodes, which takes only about 343.6 seconds.  Still not quite
there, by a factor of almost 2.  The problem is that this math is for an 18-ply
nominal search, which I don't think is what they were doing.
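
These estimates are easy to check mechanically.  Here is a small sketch that
just redoes the node-count arithmetic with the standard minimal alpha-beta tree
approximation (nodes ~ sqrt(b)^depth); the branching factors and the 200M NPS
figure are the numbers being debated in this thread, not measured Deep Blue
internals.

```python
# Back-of-the-envelope check of the 18-ply estimates above.
# Minimal alpha-beta tree approximation: nodes ~ sqrt(b)^d = b^(d/2).

NPS = 200_000_000  # assumed aggregate hardware speed from the thread

def min_nodes(b, depth):
    """Approximate minimal alpha-beta tree size (even depth)."""
    return b ** (depth // 2)

for b in (40, 20, 16):
    nodes = min_nodes(b, 18)
    print(f"b={b}: {nodes:,} nodes, {nodes / NPS:,.1f} s at 200M NPS")
```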

The above simply isn't the right way to calculate the size of their search tree
at a given depth, since they used a 3-tiered search.  You can't take the raw
speed of the chess processors and apply it to the whole tree, at its full
depth, and expect a correct answer.  IMO, you must break the search into two
parts - software and hardware - and calculate them separately to get a correct
estimate.  I'm going to make at least a couple of assumptions I find reasonable.
I could very well be wrong, and I hope someone will point out any large
mathematical or logical blunders.

First, assume that the software had a throughput of 1,000,000 NPS.  They had a
32-processor RS/6000 SP system, so their software may even have been faster; I
chose this number for simplicity, and because it seems pretty reasonable.

Second, they didn't pass every terminal position to the chess processors.  This
would be practically impossible with the billions of terminal nodes.  We know
that each chess processor did about 2M NPS.  So using a branching factor of
sqrt(20) for the hardware, doing a 6-ply search, we get 8000 nodes/search.  Each
chess processor could do 250 such searches in one second.  With 480 chess
processors, they could process 120k 6-ply searches/second.  If they used 1/3 of
their time in the hardware search (60 seconds/move), that gives 7.2M 6-ply
searches done. (In 10 seconds, they can process 5.4M 5-ply searches in this way
- in 2 seconds, 4.8M 4-ply searches, etc.)  Since we know they couldn't have
passed all positions to the chess processors, I'm going to assume they passed
only "interesting" positions: PV nodes, "unstable" nodes (they supposedly had a
way to detect things like this), and perhaps nodes whose scores are near the PV
score (or something similar).  All these "interesting" endpoint nodes passed to
the chess processors are likely fewer than 1/5000 of the total.  The rest are
pretty much junk nodes which are useless to search in hardware.
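
The probe-throughput arithmetic above can be sketched the same way.  All the
inputs here (2M NPS per chess chip, 480 chips, a sqrt(20) effective branching
factor for the hardware trees) are this post's assumptions, not confirmed Deep
Blue figures.

```python
# Hardware-probe throughput under the assumptions above.
CHIP_NPS = 2_000_000   # nodes/second per chess processor (assumed)
N_CHIPS = 480          # number of chess processors
B = 20                 # each d-ply probe costs about sqrt(20)^d nodes

def probes_per_second(depth):
    """How many depth-ply hardware probes all chips do per second."""
    nodes_per_probe = B ** (depth / 2)   # sqrt(20)^depth
    return N_CHIPS * CHIP_NPS / nodes_per_probe

print(probes_per_second(6))        # 6-ply probes: ~120,000 per second
print(probes_per_second(6) * 60)   # ~7.2M 6-ply probes in 60 seconds
print(probes_per_second(5) * 10)   # ~5.4M 5-ply probes in 10 seconds
print(probes_per_second(4) * 2)    # ~4.8M 4-ply probes in 2 seconds
```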

So now we have this:  software searching to 12 plies, then passing some selected
positions to chess processors to search 6 more plies.  How long does the
software search take?  Let's see:

sqrt(20)^12 = 64,000,000.  This would take all of 64 seconds at the assumed
rate.  They can pass a full 1/10th of their search tree to the chess processors,
which is surely far more than needed, if they want to wait 60 seconds for the
result.  If they only want to wait 10 seconds, they can still pass almost 1/50th
of the total terminal nodes to the chess processors.  You could even say their
software branching factor was sqrt(23), making their tree sqrt(23)^12 =
148,035,889 nodes - more than twice as big!  This takes 148 seconds; then, if
they pass 1/100 of the positions on, they wait a whopping 12.3 seconds for the
hardware result - about 160 seconds total for the 12-ply search plus hardware
extensions for "interesting" nodes.  Give or take a bit, their logs roughly
agree with these numbers for the total timings.
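
Putting the two tiers together, the 12-ply timing estimate works out like this
(the 1M software NPS and the ~120k hardware probes/second are the assumptions
made earlier in this post):

```python
# Combined software + hardware timing under the post's assumptions.
SOFT_NPS = 1_000_000       # assumed software speed
PROBES_PER_SEC = 120_000   # 6-ply hardware probes/second, estimated above

def search_time(b, depth, fraction_probed):
    tips = b ** (depth // 2)        # ~sqrt(b)^depth terminal nodes
    software = tips / SOFT_NPS      # time for the software tree
    hardware = tips * fraction_probed / PROBES_PER_SEC
    return tips, software, hardware

tips, soft, hard = search_time(23, 12, 1 / 100)
print(f"{tips:,} tips: {soft:.0f} s software + {hard:.1f} s hardware")
# roughly: 148,035,889 tips, 148 s software + 12.3 s hardware
```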


>18 ply fullwidth without hashtable is unreachable CCC audience, with

You're right - 18-ply fullwidth with or without hashtables is unreachable even
with 200M NPS.

The rest of what Vincent wrote doesn't have much to do with the actual
discussion at hand, so it was snipped.


I hope I have shown a plausible explanation for the search depth numbers shown
in DB logs.  Please let me know what you think.

Regards,
Jeremiah


