Computer Chess Club Archives



Subject: Re: next deep blue

Author: Christophe Theron

Date: 13:10:20 01/24/00



On January 24, 2000 at 09:21:32, Robert Hyatt wrote:

>On January 23, 2000 at 22:56:04, Christophe Theron wrote:
>
>>On January 23, 2000 at 03:35:35, Bruce Moreland wrote:
>>
>>>On January 23, 2000 at 02:51:55, Amir Ban wrote:
>>>
>>>>The results can be disregarded on these grounds of course, but it's also true
>>>>that the results, as reported, can be dismissed as being in contradiction to the
>>>>DB/DT public record, and to common sense in general.
>>>
>>>Here are some ideas about what might have happened in those games:
>>>
>>>1) DB Jr may have beaten those programs purely through eval function
>>>superiority.
>>>
>>>2) It may have won because of superior search.
>>>
>>>3) There may have been a poor comparison between node rates, resulting in DB Jr
>>>having a massive hardware advantage.
>>>
>>>4) The whole thing may be fictitious.
>>>
>>>5) Random chance.
>>>
>>>6) Something I haven't thought of yet.
>>>
>>>Bob may go nuts because I included #4.  I don't believe that #4 is true, but
>>>someone can always claim that it is, and there is no obvious evidence that can
>>>be used to refute this claim, which disadvantages us who want to understand this
>>>rather than argue religion and conspiracies all day.
>>>
>>>#1 is what we are expected to believe, I thought that is what this test was
>>>supposed to measure.  I have a very hard time with this one.  I don't believe
>>>there are any terms that in and of themselves would result in such a lopsided
>>>match.  I don't believe that I could set up my program to search exactly a
>>>hundred million nodes per search, and play it against the best eval function I
>>>could possibly write, also searching a hundred million nodes per search, and
>>>score 38-2.
>>
>>
>>I totally agree with you here.
>>
>>
>>
>>>Could I be convinced that #1 is true?  You bet!  Will I accept that #1 is true
>>>based upon faith in the reputations of Hsu and Campbell?  With all due respect,
>>>not a chance.  I don't think anyone should be expected to be so trusting in a
>>>field that's even remotely scientific.
>>>
>>>It would also be hard to accept #2, since DB is supposedly not optimized for
>>>short searches.  And I believe that you've ruled out #5, which seems a sensible
>>>thing to do.
>>
>>
>>I haven't followed the discussion, but on short searches, how many plies deeper
>>do you need to compute to get a 38-2 result?
>>
>>My guess is that 3 plies and a bit of luck would do it easily. You need to be
>>100 to 200 times faster than your opponent to achieve this (less if you have a
>>decent branching factor, but DB has not).
>>
>>I think this is a very easy experiment to do.
>>
>>DB is definitely optimized for short searches, if you think about it.
>>
>>It has the best NPS of all times, and probably one of the worse branching factor
>>you can imagine, because of this crazy singular extension and lack of null move
>>(or related) optimization.
>>
>>So I would say that compared to modern microcomputer programs it would perform
>>worse and worse as time control increases.
>>
>>
>>Maybe I missed something?
>>
>>
>>    Christophe
>
>
>Their branching factor didn't look bad to me, knowing they don't do null-move.
>It seemed to stick between 5 and 6 most of the time, which is roughly normal
>for alpha/beta (it should average roughly sqrt(38) if there are 38 legal moves.)


Yes, I didn't mean it was that bad, just worse than what you get with a decent
pruning system.

Their branching factor might actually be worse in the very last plies (the ones
computed inside the chess chips) because they lack hash-table (HT) optimization
there.

Then the plies computed by software should have a better BF as they are using a
hash table there.

So for a deep enough search the resulting BF might be as good as a well-ordered
alpha-beta search can achieve, typically between 5 and 6 (and usually close to
5).
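To put rough numbers on this (my own back-of-the-envelope sketch, not figures
from the post): with good move ordering, alpha-beta's effective branching
factor is roughly the square root of the number of legal moves, and the speedup
needed to search extra plies grows as BF raised to the number of plies. This
lines up with the "100 to 200 times faster for 3 plies" figure quoted above.

```python
import math

# Effective branching factor under good alpha-beta ordering:
# roughly sqrt(b) for b legal moves (sqrt(38) is about 6.2).
b = 38
ebf = math.sqrt(b)
print(f"effective BF for {b} legal moves: {ebf:.2f}")

# Search time grows like BF**depth, so the speedup needed for
# 3 extra plies at BF 5.5 is about 5.5**3, i.e. roughly 166x.
for bf in (5.0, 5.5, 6.0):
    print(f"BF {bf}: 3 extra plies cost a factor of {bf**3:.0f}")
```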

However, I'm wondering about the singular extension stuff. As I understand it,
the cost of detecting singular moves is linear (it would not increase the
branching factor, just add a percentage to the total search time), but the cost
of the extension itself definitely increases the branching factor (it increases
the search time exponentially).

Of course I have no idea whether it would be worse, in terms of BF, than the
set of extensions microcomputer programs generally use.

I think we can safely assume that their branching factor was above 5, and
probably significantly higher. And I did not even factor in the extra cost of
the parallel search.
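The linear-versus-exponential distinction can be illustrated with a toy cost
model (my own illustration with assumed numbers, not DB's actual figures): a
fixed detection overhead multiplies the node count by a constant, while
extending lines effectively deepens the tree, multiplying it by BF per extra
(fractional) ply.

```python
# Toy cost model: compare a linear detection overhead with the
# exponential cost of extending the search. All numbers assumed.
bf = 5.5
base_nodes = bf ** 10          # a nominal 10-ply search

detection_overhead = 1.20      # +20% linear cost (assumed)
extension_plies = 0.5          # extensions add ~half a ply on average (assumed)

with_detection = base_nodes * detection_overhead
with_extension = base_nodes * bf ** extension_plies

print(f"linear detection cost:      x{with_detection / base_nodes:.2f}")
print(f"exponential extension cost: x{with_extension / base_nodes:.2f}")
```

The point is only qualitative: the detection factor stays constant no matter
how deep the search goes, while the extension factor compounds with depth.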



>I don't think it would do "worse and worse".  Any more than any other program
>would.  Although it might do worse as depth decreases depending on what they
>did in their eval.


With such a "high" branching factor, you can expect to end up doing worse in
terms of average ply depth than a low-BF program.

Of course, with their NPS, they start with a huge advantage. But if you draw
the curve of ply depth versus time for both DB and a modern commercial program,
you can expect DB's curve to eventually be overtaken by the commercial
program's curve.
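A quick sketch of those two curves, with illustrative NPS and BF figures I've
assumed for the sake of the argument (not measured values): depth reachable in
a given time is roughly log(NPS * t) / log(BF), so a huge NPS lead eventually
loses to a smaller BF.

```python
import math

def depth(nps, bf, seconds):
    """Plies reachable in `seconds` at `nps` nodes/sec, assuming
    node count grows like bf**depth (a crude model)."""
    return math.log(nps * seconds) / math.log(bf)

# Illustrative figures only: DB-class hardware with a high BF
# versus a micro with aggressive pruning (lower BF, far lower NPS).
db    = (200_000_000, 5.5)   # assumed NPS and BF
micro = (    100_000, 3.0)   # assumed NPS and BF

for t in (1, 60, 3600, 86400):
    print(f"t={t:>6}s  DB: {depth(*db, t):5.1f} plies"
          f"  micro: {depth(*micro, t):5.1f} plies")
```

With these assumed numbers the high-NPS/high-BF program leads at short time
controls, but the low-BF program passes it as the time control grows, which is
exactly the "worse and worse" effect described above.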

That's what I meant by "doing worse and worse". I could have written "doing
less and less well".

Maybe I'm wrong, because the singular extension stuff would compensate for
this, and the pruning systems of commercial programs would lose important
information that a pure alpha-beta search would retain. But I don't think so.


My opinion is that Deep Blue is much stronger than micros just because it has a
huge NPS.


But if you present things like this, it's not very sexy.

So Hsu decided to use a totally different approach from the micros'.

By not using a well-known pruning system and introducing a new extension scheme
of his own, he presents himself as a pioneer. A genius so bright that he has
understood that what everybody else considers very good (null move or similar
recipes) is in fact rubbish. A guru who has invented a bright, human-like
extension: the singular extension!

In fact, Hsu is more a hardware and public relations genius than a technical
chess programming genius.

He had so much power available that he could afford to squander it just to look
brighter than every other chess programmer.

In my opinion, Deep Blue as it has been programmed is much weaker than it could
have been. That is not to say it is not very strong.



>Their branching factor can be roughly computed from looking at the logs.


I have not checked it myself; I hope somebody will try to evaluate it.



    Christophe





Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.