Computer Chess Club Archives



Subject: Re: What is the public's opinion about the result of a match between DB and

Author: Christophe Theron

Date: 10:25:40 04/25/01



On April 25, 2001 at 00:59:51, Eugene Nalimov wrote:

>On April 24, 2001 at 23:56:26, Christophe Theron wrote:
>
>>On April 24, 2001 at 20:38:49, Robert Hyatt wrote:
>>>
>>> [...]
>>>
>>>You _are_ aware that DB's branching factor was well below 5?  I posted the
>>>analysis here a year ago (Ed probably still has it as he was interested).  I
>>>took the 1997 logs and just computed the ratio using time.  They were nowhere
>>>near 5...  not even near 4...
>>
>>
>>
>>
>>I was not aware of that.
>>
>>But given that Deep Blue reached ply depths higher than what I have written
>>above, I have good reasons to believe that their BF was indeed well below 4.
>>
>>So they were using a selective algorithm.
>>
>>I wonder what it was.
>
>Bob mentioned that they used futility pruning in their hardware search. But I
>believe that they can have more-or-less the same branching factor even without it.



Right. Futility pruning can be applied only in the last few plies of the search,
so its effect is LINEAR (a roughly constant saving, once the search depth is
clearly greater than the number of final plies on which futility is applied). The
effect of the branching factor, on the other hand, is EXPONENTIAL.
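To make the arithmetic concrete (these are my own toy numbers, not Deep Blue's), here is a small sketch assuming a uniform tree of b**d nodes:

```python
# Toy node-count model (illustrative numbers, not Deep Blue's): a uniform
# tree with branching factor b searched to depth d costs about b**d nodes.
# Futility pruning trims only the last plies, so its saving is a constant
# factor; a lower branching factor compounds at every ply.

def nodes(b, d):
    """Approximate node count of a uniform tree of branching factor b, depth d."""
    return b ** d

full = nodes(5, 12)
futility = nodes(5, 10)      # optimistically assume the last 2 plies cost nothing
lower_bf = nodes(4, 12)      # branching factor reduced from 5 to 4

print(full / futility)       # 25x -- a constant, whatever the depth
print(full / lower_bf)       # (5/4)**12, about 14.5x at depth 12
print(nodes(5, 20) / nodes(4, 20))   # about 87x at depth 20 -- exponential
```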




>Today's micro program (e.g. some-Tiger) prunes a lot using null move and
>undoubtedly a lot of other secret heuristics. Deep Blue does not prune; instead
>it extends some lines like crazy. Net result is the same -- both programs have
>highly unbalanced trees, so I don't see why their branching factor cannot be
>(roughly) the same.



Then you need to think twice about it.

The effect of extensions is to increase the branching factor. On the other hand,
you also increase the average ply depth you are looking at.

So if you take the nominal ply depth to measure the branching factor (which is
the usual definition), then extensions undoubtedly increase the BF.

If you take the "average ply depth reached" to measure the branching factor,
then you might again get a better branching factor. But I do not know how to
define "average ply depth". I understand intuitively what it is, but I cannot
give you a definition.

However I understand what you mean. You say that selectivity or extensions can
be used to achieve approximately the same goal, which is to unbalance the search
tree in the way human players do.
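To pin down what I mean by branching factor despite the unbalanced trees: one common convention is to take the growth ratio between successive iterations of iterative deepening, much as Bob computed it from the times in the logs. A sketch, with made-up node counts:

```python
# One way to measure an *effective* branching factor without defining
# "average ply depth": the growth ratio of cost (nodes or time) between
# successive iterations of iterative deepening. The node counts below are
# hypothetical, chosen only to illustrate the computation.

def effective_bf(nodes_per_iteration):
    """Geometric mean of the growth ratio between successive depths."""
    ratios = [b / a for a, b in zip(nodes_per_iteration, nodes_per_iteration[1:])]
    product = 1.0
    for r in ratios:
        product *= r
    return product ** (1.0 / len(ratios))

# Hypothetical iteration node counts for depths 8..12:
counts = [1.0e6, 3.6e6, 13.0e6, 46.0e6, 165.0e6]
print(round(effective_bf(counts), 2))   # ~3.58 with these made-up numbers
```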




> The difference is the number of nodes in the trees, but of
>course the Deep Blue team can allow it to be orders of magnitude higher than a micro developer...
>And of course with their approach search results are much more reliable; their
>N-plies search is really N-plies search.



No doubt that if it used the kind of selective algorithms that the micros are
using, Deep Blue would be orders of magnitude stronger (because it is at
least 250 times faster).

However I'm not really sure its selective algorithms (I mean the mixture of
selectivity and extensions it is using) are as efficient as the ones we have in
micro programs.

Actually I'm pretty sure they are not. These algorithms are what takes years to
develop in a chess program, and the Deep Blue team was clearly not very
skilled in this area.

At least not as skilled as top chess programmers are in this area.




>>>>And it turns out that ply depth 13 can be routinely reached by today's PC
>>>>programs at standard time controls.
>>>
>>>Yes.. but don't forget DB was reaching depths of 15-18 in the middlegame,
>>>as their logs from 1997 clearly show...
>>
>>
>>
>>Yes, it looks like I was badly informed.
>>
>>Knowing that, I now believe that no micro program of today would have any
>>reasonable chance against Deep Blue.
>>
>>Against a single chip maybe, not against the whole thing.
>>
>>
>>
>>
>>
>>>>But there are 2 things I'm not really taking into account here:
>>>>1) selective search is less reliable than brute force search
>>>>2) Deep Blue uses something called "Singular extensions" which increases its
>>>>branching factor dramatically over the BF of a simple alpha beta brute force
>>>>algorithm.
>>>>
>>>>
>>>>Point 1 is hard to evaluate.
>>>>
>>>>About point 2 we have some data suggesting that "singular extensions" is an
>>>>extremely expensive algorithm: while PC programs have no problem to reach ply
>>>>depth 13 on current hardware, Deep Blue could not go beyond ply depths 11-12 in
>>>>the 1997 match. Of course in some lines it was computing much deeper.
>>>
>>>Not again.  Look at the logs.  11(6) is a +seventeen+ ply search.
>>>
>>>
>>>
>>>
>>>>
>>>>It remains to be seen if "Singular extensions" is such an improvement. So far I
>>>>think that nobody has managed to prove that it is. Some people speculate it
>>>>could be effective only if you have a very fast computer, but only the Deep Blue
>>>>experiment suggests this, without further scientific proof.
>>>
>>>No.  I used them in later cray blitz versions.  HiTech used them as well.
>>>They have their good and bad points.  Some micro programs have/do use them
>>>as well...
>>
>>
>>
>>I'm sorry but I still have to read a study about the success of this extension.
>
>There was an article in the ICCA Journal (in 1989?). Later they changed their
>mind about it, but then changed it once again. During his last lecture at MS,
>Hsu said that this extension is a very important one.
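For reference, the criterion behind singular extensions can be restated schematically like this. The margin and the scores are invented, and real implementations test the alternatives with reduced-depth, zero-window re-searches rather than comparing full scores; this is only the idea:

```python
# Schematic restatement of the singular-extension idea: a move is
# "singular" when it is the only move whose (reduced-depth) score stays
# within some margin of the best; a singular move is then searched one
# ply deeper. The margin and sample scores below are illustrative only.

MARGIN = 50   # centipawns below the best score; value is made up

def is_singular(scores, margin=MARGIN):
    """scores: reduced-depth scores of all moves at a node.
    True when exactly one move is within `margin` of the best."""
    best = max(scores)
    near_best = [s for s in scores if s >= best - margin]
    return len(near_best) == 1

# One recapture clearly dominates -> extend it one extra ply:
print(is_singular([120, -30, -80, -200]))   # True
# Two comparable moves -> no extension:
print(is_singular([120, 90, -80]))          # False
```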



What I mean is that the improvement coming from this stuff (if any) is a speck
of dust compared to the strength they get from raw speed.

And even if it is in fact making the program weaker, nobody would ever
notice.

Apparently they did not care much about showing it was really better, or else
they would have studied it more carefully.

What I mean is that even with my very modest means I am able to detect whether
a change in my program improves its strength or not, so how can it be that they
have not been able to do the same with SE?
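To show what I mean by detecting an improvement with modest means, here is the kind of back-of-the-envelope statistics I have in mind: play a match between the old and new versions and check whether the score is far enough from 50% to be meaningful. This is a sketch with hypothetical match results, not anyone's actual testing code:

```python
import math

def score_and_error(wins, draws, losses):
    """Match score in [0, 1] and its standard error."""
    n = wins + draws + losses
    score = (wins + 0.5 * draws) / n
    # Sample variance of per-game results (1, 0.5, 0):
    var = (wins * (1 - score) ** 2
           + draws * (0.5 - score) ** 2
           + losses * (0 - score) ** 2) / n
    return score, math.sqrt(var / n)

# Hypothetical 1000-game match after enabling a new extension:
score, err = score_and_error(wins=280, draws=500, losses=220)
print(round(score, 3), round(err, 4))
# The change looks real only if |score - 0.5| is a few times err:
print((score - 0.5) / err > 2)   # crude "2 sigma" check
```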





>>Anyway, if it is such a small improvement that it is not even clear by now
>>whether it is good or not, then in my opinion the main (and maybe only)
>>strength of Deep Blue is its huge NPS.
>>
>>But that's not new.
>>
>>That's why I see Deep Blue as a technical and financial success, and just that,
>>certainly not as a breakthrough in computer chess.
>
>Then the Moon landing also was not a breakthrough. In principle everything was
>doable (and even simple); the USA only needed to create the organization (I
>believe Apollo was the 2nd most complex program in human history), put a lot of
>money into it, develop the technology, and go forward. Nevertheless I believe
>Apollo *was* a breakthrough (its results were not used, but that's another
>story). Deep Blue is exactly the same -- when you put together a lot of money
>and a very competent team, the results are usually very good.



It's not a breakthrough, it's a milestone.

Null-move is a breakthrough. Deep Blue is a milestone.




    Christophe



