Computer Chess Club Archives



Subject: Re: What is the public's opinion about the result of a match between DB and

Author: Robert Hyatt

Date: 17:38:49 04/24/01



On April 24, 2001 at 15:12:03, Christophe Theron wrote:

>On April 24, 2001 at 10:13:29, Albert Silver wrote:
>
>>On April 24, 2001 at 10:01:04, Robert Hyatt wrote:
>>
>>>On April 24, 2001 at 05:06:40, Amir Ban wrote:
>>>
>>>>On April 24, 2001 at 03:47:15, Uri Blass wrote:
>>>>
>>>>>the best software that is not IBM.
>>>>>
>>>>>Suppose there is a match of 20 games at tournament time control
>>>>>
>>>>>I am interested in knowing how many people expect 20-0 for IBM.
>>>>>How many people expect 19.5-0.5?....
>>>>>
>>>>>If IBM expects to achieve a better result than the average result the public
>>>>>expects, then they can earn something from playing a match of 20 games with Deep Blue.
>>>>>
>>>>>I believe that the part of the public who read the claim that Kasparov played like
>>>>>an IM is not going to expect a good result for IBM.
>>>>>
>>>>>Uri
>>>>
>>>>I expect DB ('97 version) to lose 8-12.
>>>>
>>>>Amir
>>>
>>>
>>>Based on what specific facts?  How many games did they lose from their debut
>>>in 1987 through 1995, the last event they played in with other computers?  Let
>>>me see... that would be... _one_ game.  So what suggests they would do worse
>>>today?  We are all 100x slower (or more).
>>
>>Yes, and another thing that is being seriously overlooked is just how much
>>difference speed and an extra ply make in comp-comp matches. One thing that
>>time and the SSDF have CLEARLY taught is that one extra ply in a comp-comp
>>match makes a world of difference. I think pitting a PC program against DB
>>would be a massacre, even though I don't think a human (a very top GM) would
>>do that much worse against DB (compared to DB vs. PC) than against an 8-way
>>server-run PC program, as will be the case here. Provided the conditions were
>>the same, and that both matches had equal preparation, of course.
>>
>>                                           Albert
>
>
>
>I'm not sure I would agree with you.
>
>Yes, Deep Blue is way faster than PC programs (even on today's PCs) in NPS, but
>there is something you should not forget.
>
>Due to Hsu's beliefs, as pointed out by Bob several times, Deep Blue is
>essentially a brute force searcher.
>
>But after 3 decades of chess programming on microcomputers, we all know that
>brute-force search is extremely inefficient.
>
>Actually, brute force becomes increasingly inefficient as ply depth increases. Or,
>if you prefer: the difference between brute-force and selective searches, in terms
>of nodes to compute to reach a given ply depth, grows exponentially with ply depth.
>
>Today, good selective programs can achieve a "branching factor" which is under 3
>(and that includes the extra work induced by extensions). A well-designed brute-force
>alpha-beta searcher, without extensions, achieves a BF between 4 and 5.
>
>Some time ago I found that a good brute-force alpha-beta implementation has
>a BF of 4.3.
>
>I think current top programs have a BF which is close to 2.5, but let's say it's
>2.8 to keep some margin.
>
>
>You can compute the ratio of nodes searched by brute force divided by nodes
>searched by selective search as:
>
>  ratio = 4.3^depth / 2.8^depth         (^ means "power")
>
>
>Now, about the NPS: Deep Blue is supposed to be able to compute 200 million NPS
>(nodes per second). Current top PC programs on the fastest hardware (single CPU)
>can compute up to 800,000 NPS, that's 1/250 of what DB can do.
>
>
>At what depth does the "ratio" start to be above 250?
>
>Answer: at ply depth 13.
>
>So I suspect that Deep Blue and current top programs on top hardware (single
>CPU) can reach ply depth 13 in the same time.
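
[The crossover arithmetic in the quoted passage can be checked with a short sketch. The figures are the ones Christophe assumes in the post (brute-force BF 4.3, selective BF 2.8, a 250x speed advantage for DB); this is an illustration of his calculation, not independent data.]

```python
# Sketch of the crossover arithmetic from the post: at what depth do the
# selective searcher's node savings overtake the faster machine's raw
# speed advantage?  (Assumed figures: BF 4.3 vs 2.8, speed ratio 250.)
def crossover_depth(bf_brute=4.3, bf_selective=2.8, speed_ratio=250):
    depth = 1
    # ratio = bf_brute^depth / bf_selective^depth = (bf_brute/bf_selective)^depth
    while (bf_brute / bf_selective) ** depth <= speed_ratio:
        depth += 1
    return depth

print(crossover_depth())  # prints 13, matching the post
```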

You _are_ aware that DB's branching factor was well below 5?  I posted the
analysis here a year ago (Ed probably still has it, as he was interested).  I
took the 1997 logs and just computed the ratio using time.  They were nowhere
near 5... not even near 4...
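
[The time-ratio method Bob describes, measuring the effective branching factor as the geometric mean of the time ratios between successive iterations, can be sketched as below. The iteration times are made-up placeholders, NOT values from the Deep Blue logs.]

```python
# Estimate an effective branching factor from per-iteration search times,
# as the geometric mean of consecutive time ratios (the "ratio using time"
# method described above).  Times here are hypothetical, not DB's logs.
def effective_bf(iteration_times):
    ratios = [b / a for a, b in zip(iteration_times, iteration_times[1:])]
    product = 1.0
    for r in ratios:
        product *= r
    return product ** (1.0 / len(ratios))

times = [0.5, 1.7, 5.9, 20.1, 69.0]  # hypothetical seconds per iteration
print(round(effective_bf(times), 2))
```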




>
>And it turns out that ply depth 13 can be routinely reached by today's PC
>programs at standard time controls.

Yes.. but don't forget DB was reaching depths of 15-18 in the middlegame,
as their logs from 1997 clearly show...




>
>
>But there are 2 things I'm not really taking into account here:
>1) selective search is less reliable than brute force search
>2) Deep Blue uses something called "singular extensions" which increases its
>branching factor dramatically over the BF of a simple brute-force alpha-beta
>algorithm.
>
>
>Point 1 is hard to evaluate.
>
>About point 2, we have some data suggesting that "singular extensions" is an
>extremely expensive algorithm: while PC programs have no problem reaching ply
>depth 13 on current hardware, Deep Blue could not go beyond ply depths 11-12 in
>the 1997 match. Of course, in some lines it was computing much deeper.

Not again.  Look at the logs.  11(6) is a +seventeen+ ply search.




>
>It remains to be seen whether "singular extensions" are such an improvement. So
>far I think that nobody has managed to prove that they are. Some people speculate
>they could be effective only if you have a very fast computer, but only the Deep
>Blue experiment suggests this, without further scientific proof.

No.  I used them in later Cray Blitz versions.  HiTech used them as well.
They have their good and bad points.  Some micro programs have used, and still
use, them as well...
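
[For readers unfamiliar with the technique being debated: a singular extension extends a move that appears singularly best, i.e. every alternative fails a reduced-depth verification search at a margin below the best move's score. The sketch below is a heavily simplified illustration of that test, not Deep Blue's or Cray Blitz's actual code; the function names, the depth reduction, and the 50-point margin are all illustrative assumptions.]

```python
# Simplified sketch of the singular-extension test.  moves[0] is the
# candidate best move; it is "singular" (and gets its search extended)
# if every alternative fails a reduced-depth search at best_score - margin.
# `search(move, depth, bound)` is a caller-supplied search function;
# the halved depth and 50-point margin are illustrative choices only.
def is_singular(moves, search, depth, best_score, margin=50):
    bound = best_score - margin
    for alternative in moves[1:]:
        # reduced-depth verification search against the lowered bound
        if search(alternative, depth // 2, bound) >= bound:
            return False  # some alternative is nearly as good: not singular
    return True  # no alternative comes close: extend moves[0]
```

The cost Christophe alludes to comes from those extra verification searches and from the deeper subtrees the extensions create.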



>
>
>All this does not tell the whole story about a DB-PC match, but I hope I have
>given some pointers that help explain why the match could be closer than some
>people expect.
>
>And I have only assumed a single processor PC program. A multiprocessor PC
>program would have even greater chances of course.
>
>
>
>
>    Christophe




