Computer Chess Club Archives



Subject: Re: What is the public's opinion about the result of a match between DB and

Author: Christophe Theron

Date: 12:42:59 04/26/01



On April 25, 2001 at 20:02:27, Vincent Diepeveen wrote:

>On April 25, 2001 at 13:25:40, Christophe Theron wrote:
>
>>On April 25, 2001 at 00:59:51, Eugene Nalimov wrote:
>>
>>>On April 24, 2001 at 23:56:26, Christophe Theron wrote:
>>>
>>>>On April 24, 2001 at 20:38:49, Robert Hyatt wrote:
>>>>>
>>>>> [...]
>>>>>
>>>>>You _are_ aware that DB's branching factor was well below 5?  I posted the
>>>>>analysis here a year ago (Ed probably still has it as he was interested).  I
>>>>>took the 1997 logs and just computed the ratio using time.  They were nowhere
>>>>>near 5...  not even near 4...
>>>>
>>>>
>>>>
>>>>
>>>>I was not aware of that.
>>>>
>>>>But given that Deep Blue reached ply depths higher than what I have written
>>>>above, I have good reasons to believe that their BF was indeed well below 4.
>>>>
>>>>So they were using a selective algorithm.
>>>>
>>>>I wonder what it was.
>>>
>>>Bob mentioned that they used futility pruning in their hardware search. But I
>>>believe that they can have more-or-less the same branching factor even without it.
>>
>>
>>
>>Right. Futility pruning can be used only in the last plies of the search, so its
>>effect can be considered LINEAR (as soon as the depth is clearly higher than the
>>number of the last plies on which you apply futility). On the other hand the
>>effect of branching factor is EXPONENTIAL.
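
To expand on my own paragraph above, for readers who have never implemented it: frontier futility pruning looks roughly like the sketch below. This is only the textbook idea, certainly not Deep Blue's hardware version; the margin, the Position/Move types and every helper (qsearch, evaluate, in_check, is_quiet, generate_moves, make_move, undo_move) are placeholders.

    /* Sketch of futility pruning at the frontier (depth == 1).
       If the static evaluation plus a safety margin still cannot reach
       alpha, quiet moves are skipped: they are "futile".  The types and
       all helper functions are assumed to exist elsewhere in the engine. */

    #define FUTILITY_MARGIN 300            /* about a minor piece; tune it */

    int search(Position *pos, int alpha, int beta, int depth)
    {
        if (depth <= 0)
            return qsearch(pos, alpha, beta);

        int futile = (depth == 1 && !in_check(pos) &&
                      evaluate(pos) + FUTILITY_MARGIN <= alpha);

        Move moves[256];
        int n = generate_moves(pos, moves);

        for (int i = 0; i < n; i++) {
            if (futile && is_quiet(pos, moves[i]))
                continue;                  /* cannot raise alpha anyway    */

            make_move(pos, moves[i]);
            int score = -search(pos, -beta, -alpha, depth - 1);
            undo_move(pos, moves[i]);

            if (score >= beta)
                return beta;               /* fail high                    */
            if (score > alpha)
                alpha = score;
        }
        return alpha;
    }

Because the test only fires at the last ply (or the last two or three plies in more aggressive versions), it can only shave a bounded fraction off the bottom of the tree; it never changes the exponent of the growth, which is why I call its effect linear.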
>>
>>
>>
>>
>>>Today's micro program (e.g. some-Tiger) prunes a lot using null move and
>>>undoubtedly a lot of other secret heuristics. Deep Blue does not prune; instead
>>>it extends some lines like crazy. Net result is the same -- both programs have
>>>highly unbalanced trees, so I don't see why their branching factor cannot be
>>>(roughly) the same.
>>
>>
>>
>>Then you need to think twice about it.
>>
>>The effect of extensions is to increase the branching factor. On the other hand,
>>you also increase the average ply depth you are looking at.
>>
>>So if you take the nominal ply depth to measure the branching factor (which is
>>the usual definition), then you undoubtedly increase the BF with extensions.
>>
>>If you take the "average ply depth reached" to measure the branching factor,
>>then you might have again a better branching factor. But I do not know how to
>>define "average ply depth". I understand intuitively what it is, but I cannot
>>give you a definition.
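
By the way, the usual back-of-the-envelope measurement is simply the growth from one iteration of iterative deepening to the next, in nodes or in time (which is what Bob did with the 1997 logs), or nodes^(1/depth) for a whole search. A toy example, with numbers I made up and which have nothing to do with Deep Blue:

    /* Toy computation of an "effective branching factor": the ratio of
       the cost of successive iterative-deepening iterations, plus the
       nodes^(1/depth) variant.  All numbers are invented for the example. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double seconds[] = { 10.0, 28.0, 80.0, 230.0, 660.0 };  /* depths 9..13 */
        int count = sizeof seconds / sizeof seconds[0];

        for (int i = 1; i < count; i++)
            printf("EBF from depth %d to %d: %.2f\n",
                   8 + i, 9 + i, seconds[i] / seconds[i - 1]);

        double nodes = 2.0e8;              /* nodes of the whole search    */
        int    depth = 13;                 /* nominal depth reached        */
        printf("nodes^(1/depth) = %.2f\n", pow(nodes, 1.0 / depth));
        return 0;
    }

With extensions in the tree, both measures charge all the extra nodes to the nominal iteration that generated them, which is why extensions push the number up even though the deepest lines go far beyond the nominal depth.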
>>
>>However I understand what you mean. You say that selectivity or extensions can
>>be used to achieve approximately the same goal, which is to unbalance the search
>>tree in the way human players do.
>>
>>
>>
>>
>>> The difference is the number of nodes in the trees, but of
>>>course the Deep Blue team can allow it to be orders of magnitude higher than a micro developer can...
>>>And of course with their approach search results are much more reliable; their
>>>N-plies search is really N-plies search.
>>
>>
>>
>>No doubt that if it used the kind of selective algorithms that the micros are
>>using, then Deep Blue would be orders of magnitude stronger (because it is at
>>least 250 times faster).
>>
>>However I'm not really sure its selective algorithms (I mean the mixture of
>>selectivity and extensions it is using) are as efficient as the ones we have in
>>micro programs.
>>
>>Actually I'm pretty sure they are not. These algorithms are what takes years to
>>be developed in a chess program, and the Deep Blue team was clearly not very
>>skilled in this area.
>>
>>At least not as skilled as top chess programmers are. In this area.
>>
>>
>>
>>
>>>>>>And it turns out that ply depth 13 can be routinely reached by today's PC
>>>>>>programs at standard time controls.
>>>>>
>>>>>Yes.. but don't forget DB was reaching depths of 15-18 in the middlegame,
>>>>>as their logs from 1997 clearly show...
>>>>
>>>>
>>>>
>>>>Yes, it looks like I was badly informed.
>>>>
>>>>Knowing that, I now believe that no micro program of today would have any
>>>>reasonable chance against Deep Blue.
>>>>
>>>>Against a single chip maybe, not against the whole thing.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>>>But there are 2 things I'm not really taking into account here:
>>>>>>1) selective search is less reliable than brute force search
>>>>>>2) Deep Blue uses something called "Singular extensions" which increases its
>>>>>>branching factor dramatically over the BF of a simple alpha beta brute force
>>>>>>algorithm.
>>>>>>
>>>>>>
>>>>>>Point 1 is hard to evaluate.
>>>>>>
>>>>>>About point 2 we have some data suggesting that "singular extensions" is an
>>>>>>extremely expensive algorithm: while PC programs have no problem reaching ply
>>>>>>depth 13 on current hardware, Deep Blue could not go beyond ply depths 11-12 in
>>>>>>the 1997 match. Of course in some lines it was computing much deeper.
>>>>>
>>>>>Not again.  Look at the logs.  11(6) -- 11 plies of software search plus 6 more in the hardware -- is a +seventeen+ ply search.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>>It remains to be seen if "Singular extensions" is such an improvement. So far I
>>>>>>think that nobody has managed to prove that it is. Some people speculate it
>>>>>>could be effective only if you have a very fast computer, but only the Deep Blue
>>>>>>experiment suggests this, without further scientific proof.
>>>>>
>>>>>No.  I used them in later Cray Blitz versions.  HiTech used them as well.
>>>>>They have their good and bad points.  Some micro programs have used or do use them
>>>>>as well...
>>>>
>>>>
>>>>
>>>>I'm sorry, but I have yet to read a study about the success of this extension.
>>>
>>>There was an article in the ICCA Journal (in 1989?). Later they changed their minds
>>>about it, but then changed them once again. During his last lecture at MS,
>>>Hsu said that this extension is a very important one.
>>
>>
>>
>>What I mean is the improvement coming from this stuff (if any) is a particle of
>>dust compared to the strength they get from speed.
>>
>>And even if it is in fact making the program weaker, nobody could ever
>>notice.
>>
>>Apparently they did not care much about showing it was really better, or else
>>they would have studied it more carefully.
>>
>>What I mean is that with my very modest means I am able to detect if a change in
>>my program is improving the strength or not, so how can it be that they have not
>>been able to do the same with SE?
>
>I'm nodding bigtime here. SE is good for solving testsets, but nowhere near
>what recaptures and check extensions give.
>As far as we know Deep Blue only played 10 practice games at around the 30 0
>level (30 minutes, no increment) against Rebel 8.
>
>So that's very little testing. So their only data for SE are tactical
>positions, I assume.
>
>This means simply that I have way more data about SE than they have.
>
>Jan Louwman is playing a lot of auto232 games for me nowadays,
>for which I thank him bigtime!
>
>Several versions Jan had there played WITH the recapture extension and
>WITH the singular extension.
>
>I did not notice *any* difference whatsoever at levels of around 3 hours
>a game or more (K7-900, up to 6 hours a game on Celeron500 vs Celeron500).
>
>However at the 30 0 level the difference was very obvious, in favour
>of the versions which did all kinds of extensions!
>
>I don't need to mention that the SE version hardly gets over 10 plies
>of search depth. Note SE only costs a ply or so at 3 hours a game.
>The really expensive things were extending captures, recaptures especially.
>
>Those cost me the biggest part of the search depth.
>
>Yet searching on a dual I see the difference very quickly between the different
>versions: the SE + recapture + matethreat version is simply dying when
>getting to around 10 plies of depth.
>
>I use it, btw, with a reduction factor of 3 plies, so I can only apply it
>at 4 plies from the leaves, whereas DB most likely uses it only 6 plies
>away from the leaves.
>
>I also tried it at 5 and 6 plies away from the leaves, but most of the tactical
>increase in solving testset positions comes from the check extensions and
>recaptures.
>
>The check extension doesn't cost me much actually, but those extensions have been
>heavily worked on over the years.
>
>However SE INCREASES the b.f. considerably.
>Even worse are recaptures.
>
>So it's questionable how long I will keep those extensions inside my
>program!
>
>I already threw out recaptures. It just costs me too much!
>
>Now I need to note here that in qsearch I do not prune on alpha,
>so a lot of tactical things which others NEED recapture extensions to see,
>I detect in qsearch, as I try all interesting moves there.
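
For readers who have never implemented it: the singular extension test as published (the Anantharaman/Campbell/Hsu idea) boils down to something like the sketch below. It is only an illustration with placeholder names, types and margins -- not Vincent's code, not mine and certainly not Deep Blue's; the reduction of 3 plies is simply the value Vincent mentions above.

    /* Sketch of a singular-extension test.  "best" is the candidate move
       (typically the hash move) and best_score its score.  Every OTHER
       move is re-searched at reduced depth with a null window just below
       best_score; if they all fail low, "best" is singular and will be
       searched one ply deeper.  Move is assumed to be a plain integer
       encoding; search, make_move and undo_move are the engine's own.    */

    #define SE_REDUCTION 3       /* the reduction Vincent mentions above   */
    #define SE_MARGIN    50      /* "clearly better" threshold, centipawns */

    int is_singular(Position *pos, Move best, int best_score, int depth,
                    const Move *moves, int nmoves)
    {
        if (depth < SE_REDUCTION + 1)       /* too close to the leaves     */
            return 0;

        int bound = best_score - SE_MARGIN;

        for (int i = 0; i < nmoves; i++) {
            if (moves[i] == best)
                continue;
            make_move(pos, moves[i]);
            int score = -search(pos, -bound, -bound + 1,  /* null window   */
                                depth - SE_REDUCTION);
            undo_move(pos, moves[i]);
            if (score >= bound)
                return 0;    /* some alternative comes close: not singular */
        }
        return 1;            /* nothing else comes close: extend "best"    */
    }

The cost is visible right in the sketch: every node where the test is tried pays for an extra reduced-depth search of all the alternatives, which is exactly the kind of thing that eats plies at fast time controls.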
>
>>
>>
>>
>>
>>>>Anyway, if it is such a small improvement that it is not even clear by now
>>>>if it is good or not, then in my opinion the main (and maybe only) strength of
>>>>Deep Blue is its huge NPS.
>>>>
>>>>But that's not new.
>>>>
>>>>That's why I see Deep Blue as a technical and financial success, and just that,
>>>>certainly not as a breakthrough in computer chess.
>>>
>>>Then the Moon landing also was not a breakthrough. In principle everything was
>>>doable (and even simple); the USA needed only to create the organization (I believe
>>>Apollo was the 2nd most complex program in human history), put a lot of money
>>>into it, develop the technology, and go forward. Nevertheless I believe Apollo
>>>*was* a breakthrough (its results were not used, but that's another story).
>>>Deep Blue is exactly the same -- when you put together a lot of money
>>>and a very competent team, the results are usually very good.
>>
>>
>>
>>It's not a breakthrough, it's a milestone.
>>
>>Null-move is a breakthrough. Deep Blue is a milestone.
>
>I do not know who invented nullmove. Someone already talked about it in 1979,
>but it was not called nullmove then. David Wilkins did in his paper
>about Paradise.
>
>Who invented nullmove unofficially?
>
>I heard rumours that some commercial did so around 1986 or sooner?
>
>Around 1989 Frans Morsch started to use it in normal search
>everywhere and always (recursive)?
>
>4 things speed me up bigtime:
>  a) alphabeta pruning
>  b) hashtables
>  c) last plies pruning with qsearch (nullmove actually with qsearch)
>  d) recursive nullmove in normal search



Sounds like C and D are the same. Or different implementations of the same idea.




>A and C you can of course invent yourself easily, though the
>usage of a qsearch I find very elegant. Most commercials, probably
>including you with Tiger, know a lot about pruning the last plies.



Yes, as you noticed I dare to do some very risky pruning in the last plies, which
costs me a tactical loss from time to time (more often than I'd wish,
unfortunately).

Needless to say you need years of experimenting before you can tune this
correctly.




>B is of course also not so logical, but doable, and with a load of
>RAM one invents it easily. Also Zobrist hashing
>is very natural to figure out, though if I had to figure it out
>myself I would probably go for a similar algorithm which would
>eat a few cycles more.
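
For readers who have never seen it, the Zobrist idea fits in a few lines: one random 64-bit key per (piece, square), XORed in when a piece arrives on a square and XORed out when it leaves. A toy, self-contained illustration, not taken from Tiger or from any other real engine:

    /* Toy illustration of Zobrist hashing.  One random 64-bit key per
       (piece, square); making a move is just a couple of XORs.          */
    #include <stdio.h>
    #include <stdint.h>

    #define PIECES  12                     /* 6 piece types x 2 colours   */
    #define SQUARES 64

    static uint64_t zkey[PIECES][SQUARES];

    static uint64_t rand64(void)           /* any decent 64-bit PRNG works */
    {
        static uint64_t s = 0x9E3779B97F4A7C15ULL;
        s ^= s << 13;  s ^= s >> 7;  s ^= s << 17;
        return s;
    }

    int main(void)
    {
        for (int p = 0; p < PIECES; p++)
            for (int sq = 0; sq < SQUARES; sq++)
                zkey[p][sq] = rand64();

        uint64_t hash = 0;
        hash ^= zkey[1][28];               /* put a white knight on e4    */

        hash ^= zkey[1][28];               /* the knight leaves e4 ...    */
        hash ^= zkey[1][45];               /* ... and lands on f6         */

        printf("hash after Ne4-f6: %016llx\n", (unsigned long long)hash);
        return 0;
    }

Updating the key during make/unmake costs a couple of XORs instead of rehashing the whole board, and the same key is what you then use to index the hash table.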
>
>The really hard thing to figure out is the recursive nullmove in search.
>
>The inventor of that really deserves a big award!



It is a GREAT idea, and I can say this because:
1) it is not obvious to find
2) once it has been found it sounds obvious

These are the characteristics of great ideas. They change your way of looking at
things. That's why you don't find it BEFORE, and AFTERWARDS you wonder how you
managed to miss it.
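
For completeness, this is roughly what the trick looks like in a plain negamax search -- the textbook recursive version with R=2, not Frans Morsch's actual code and not anybody else's; every helper name and type is a placeholder:

    /* Sketch of recursive null-move pruning.  Before trying real moves,
       give the opponent a free move and search with reduced depth; if
       even then we fail high, the node is cut off.  It is "recursive"
       because the reduced search may itself try a null move, so whole
       sub-trees collapse.  All helpers are assumed to exist elsewhere.  */

    #define R 2                            /* null-move depth reduction   */

    int search(Position *pos, int alpha, int beta, int depth)
    {
        if (depth <= 0)
            return qsearch(pos, alpha, beta);

        if (!in_check(pos) && depth >= R + 1 && has_non_pawn_material(pos)) {
            make_null_move(pos);
            int score = -search(pos, -beta, -beta + 1, depth - 1 - R);
            undo_null_move(pos);
            if (score >= beta)
                return beta;               /* a free move did not help them */
        }

        Move moves[256];
        int n = generate_moves(pos, moves);
        for (int i = 0; i < n; i++) {
            make_move(pos, moves[i]);
            int score = -search(pos, -beta, -alpha, depth - 1);
            undo_move(pos, moves[i]);
            if (score >= beta)
                return beta;
            if (score > alpha)
                alpha = score;
        }
        return alpha;
    }

The has_non_pawn_material() guard is there because in pawn endings zugzwang breaks the "a free move cannot hurt the opponent" assumption, which is the classical weakness of the trick.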




    Christophe


