Computer Chess Club Archives



Subject: Re: Q&A with Feng-Hsiung Hsu

Author: Vincent Diepeveen

Date: 07:01:47 10/14/02

On October 14, 2002 at 08:59:37, James Swafford wrote:

>On October 14, 2002 at 06:34:43, Vincent Diepeveen wrote:
>
>>On October 14, 2002 at 04:29:41, Daniel Clausen wrote:
>>
>>>On October 13, 2002 at 22:48:10, Jeremiah Penery wrote:
>>>
>>>>On October 13, 2002 at 21:40:42, Robert Hyatt wrote:
>>>>
>>>><snip>
>>>>
>>>>>You are _totally_ wasting your breath...
>>>>
>>>>I don't mind too much wasting my breath, as long as some decent discussion comes
>>>>from it.  :)
>>>
>>>As if that ever happened on this board when the subject was related to DB. ;)
>>>
>>>Sargon
>>
>>The marketing hype created by IBM is so big that we'll never stop
>>talking about it, just like they talked for well over 100 years about
>>the Turk, the automaton that beat Napoleon.
>>
>>It's pretty weird to see people argue that the thing searched 18
>>ply full-width based upon some mainlines, despite statements to the
>>contrary and the theoretical impossibility of doing so :)
>
>Please defend that statement.  Why is it theoretically impossible
>to search 18 ply full width?  Doing some back of the envelope
>calculations, I get
>     4.0^18 = 68.7 B nodes
>     3.9^18 = 43.6 B nodes
>     3.8^18 = 27.3 B nodes.

No. The best measured branching factor for full-width search WITH
hashtables and NO singular extensions was 4.2, and it was measured by me.

In 1998 I was nearly killed for claiming 4.2 for full-width, and for
claiming that with nullmove it would get under 4.0, perhaps even down
to 3.5.

However, if you realize that Deep Blue didn't do hashtables, you are
back at the square root of the full-width tree.

Deep Blue didn't have simple games and didn't have hashtables in the
processors. If the assumption is 6 ply in hardware, then the vast
majority of nodes is not using a hashtable at all.

So Knuth's lemma holds:

     number of nodes = sqrt(40^18) = 40^9 = 262,144,000,000,000

Now try to reach that number with something that does 126 million nodes
a second, and of that is only something like 1 to 10% effective.

Even if we believe the latest statement from the DB team that they got
10% efficiency out of their parallelism, that's 12.6 million nodes a
second.
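
To make the arithmetic concrete, here is a quick sketch in Python. The
numbers are my own assumptions (branching factor 40, 18 ply, and the
usual b^(d/2) minimal-tree bound from Knuth), not anything published by
the DB team:

    # Back-of-the-envelope sketch. Assumptions: branching factor b = 40,
    # depth d = 18 ply, and Knuth's bound that even a perfectly ordered
    # alpha-beta search without hashtables visits about b^(d/2) nodes.
    b, d = 40, 18
    minimal_tree = b ** (d // 2)      # sqrt(40^18) = 40^9 = 2.62e14 nodes
    print(f"minimal tree: {minimal_tree:.3e} nodes")

    for nps in (126e6, 12.6e6):       # raw speed vs. ~10% parallel efficiency
        seconds = minimal_tree / nps
        print(f"at {nps:.3g} nodes/s: {seconds / 86400:.0f} days")

Even at the full 126 million nodes a second that is about 24 days for
one search, and at 12.6 million it is roughly 240 days, against the 3
minutes they actually had.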

Also, please consider that they did checks in qsearch ==> huge overhead.

They have huge overhead from singular extensions. Ever implemented them?
The overhead is *huge*, and if you do not use nullmove, only some
forward pruning in the last plies of the hardware search, then you simply
*never* reduce the depth.

So when it is looking at nonsense, it extends and extends and extends.

The best comparison is with a program called Schach 3.0.

In 1997 this was the fastest-searching PC program on the planet (on a 486).

Even on a P5-133 it is impressive, doing nearly a quarter of a million
nodes a second.

Yet if I let it search on a 1.6GHz K7 (it is a 16-bit program, so it's
not that fast there), after a day of searching it still doesn't get
beyond 10 ply.

Schach WAS using recursive nullmove, and it uses singular extensions.

So let me ask you: why does Schach, with a 32MB hashtable, never get
above 11 ply?

That's *with* nullmove.

>(27.3 B nodes) / (.126 B/Sec) == 217 seconds < 4 minutes.


>What's impossible about a bf of 3.8 and a search of 217 seconds?

Full-width 3.8 is not possible. Please measure it yourself. Don't use a
hashtable in the last 6 plies, please.

You will get, like Schach, branching factors of up to 10.

And that's still without singular extensions.
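
If you want to measure it, the effective branching factor is simply the
ratio of node counts between successive iterative-deepening iterations.
A quick sketch in Python; the node counts below are invented for
illustration only, not measurements from any engine:

    # Effective branching factor from iterative-deepening node counts.
    # These counts are made-up numbers, purely for illustration.
    node_counts = {8: 1.2e7, 9: 7.0e7, 10: 4.5e8}   # depth -> nodes searched

    for depth in sorted(node_counts)[1:]:
        ebf = node_counts[depth] / node_counts[depth - 1]
        print(f"depth {depth}: EBF ~ {ebf:.2f}")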

Look, the whole point is that I have posted here a bunch of outputs from
Diep with hashtable, with singular extensions, and without nullmove.

I need an insane number of nodes to get above 10 ply.

Now, they do not have a hashtable in hardware, and they even search
without killer moves.

I asked here who wanted to run that Diep version on some positions.

I got no response.

I still have that Diep version. Are you interested in running it on some
positions?

You may modify Crafty too if you want, but then first add checks in
qsearch and add singular extensions as well (there are versions of
Crafty with singular extensions).

You will soon be horrified.

Knuth is not far from the truth.

>Note Hsu didn't claim 18 ply _every_ search.  He said 12 ply
>and up to another 6.

No, he didn't say 'another 6 ply'.

He said 12 ply in total.

>--
>James
>
>
>>
>>Amazingly, no one ever talks about Shredder here. Shredder always shows
>>longer mainlines. Some years ago I had a selective search in Diep which
>>checked the principal variation of Diep further.
>>
>>In the end I threw it out.
>>
>>Now suppose you have 480 processors idling. I'm so amazed that no one
>>can understand that, in order to get more nodes a second, which is the
>>only important thing (even in the chat yesterday Hsu was only talking
>>about nodes a second, NOT about search depths), it is important to
>>give them jobs.
>>
>>So splitting a position at the end of the PV one ply deeper is not so
>>stupid here. The rest comes from the hashtable and extensions.
>>
>>The only interesting question this Jeremiah Penery guy should ask himself
>>is: "WHAT WAS IBM BUSY DOING?"
>>
>>Answer: getting as many nodes a second as possible against Kasparov.
>>
>>Now, how do you get as many CPUs as possible to work, in order to
>>get more nodes a second with just a small search depth?
>>
>>All we know is that even at 11-ply search depths they didn't manage
>>to get the full potential out of the CPUs. In fact, 126 million nodes
>>a second is a lot less than 480 x 2.25 million nodes a second = 1.08
>>billion.
>>
>>126 million nodes a second is 11.7% of that.
>>
>>That's basically based upon the last seconds of the 3-minute search.
>>
>>In the first few seconds, not many of the 480 processors had a job.
>>
>>So what I do then is already let them split the mainline at the second
>>ply after the root. I put a bunch of processors there, despite possibly
>>getting a different alpha-beta score.
>>
>>For a 2-processor setup that's horrible for the speedup (it gives a
>>very bad speedup). For 480 processors it's great; getting them
>>busy is very important!
>>
>>In fact, we see from the Deep Blue paper from 2001 that it was already
>>taking processors off a search job if it took a bit too long to
>>search! Then it re-split and added more CPUs. That automatically
>>means you get a longer PV.
>>
>>Best regards,
>>Vincent


