Computer Chess Club Archives


Subject: Re: next deep blue

Author: Robert Hyatt

Date: 21:29:00 01/25/00



On January 25, 2000 at 21:26:19, Christophe Theron wrote:

>On January 25, 2000 at 13:33:36, Dave Gomboc wrote:
>
>>On January 24, 2000 at 16:10:20, Christophe Theron wrote:
>>
>>>However, I'm wondering about the singular extension stuff. As I understand it,
>>>the cost of detecting singular moves is linear (it would not increase the
>>>branching factor, just add a percentage to the total search time), but the cost
>>>of the extension itself definitely increases the branching factor (it increases
>>>the search time exponentially).
>>>
>>>Of course, I have no idea if it would be worse, in terms of BF, than the set of
>>>extensions microcomputers generally use.
>>>
>>>I think we can safely assume that their branching factor was above 5, and
>>>probably significantly higher. And I did not even factor in the extra cost of
>>>the parallel search.
>>>
>>>
>>>
>>>>I don't think it would do "worse and worse".  Any more than any other program
>>>>would.  Although it might do worse as depth decreases depending on what they
>>>>did in their eval.
>>>
>>>
>>>With such a "high" branching factor, you can expect to end up doing worse in
>>>terms of average ply depth than a low-BF program.
>>>
>>>Of course, with their NPS, they start with a huge advantage. But if you draw the
>>>curves of ply depth versus time for both DB and a modern commercial program, you
>>>can expect DB's curve to eventually be caught by the commercial program's curve.
>>>
>>>That's what I meant by "doing worse and worse". I could have written "doing less
>>>and less well".
>>>
>>>Maybe I'm wrong, because the singular extension stuff would compensate for this,
>>>and the pruning system of a commercial program would lose important information
>>>that a pure alpha-beta search would not. But I don't think so.
>>>
>>>
>>>My opinion is that Deep Blue is much stronger than micros just because it has a
>>>huge NPS.
>>>
>>>
>>>But if you present things like this, it's not very sexy.
>>>
>>>So Hsu decided to use a totally different approach from the micros'.
>>>
>>>By not using a good, known pruning system and introducing a new extension scheme
>>>of your own, you present yourself as a pioneer. A genius so bright that he has
>>>understood that what everybody else considers very good (null move or similar
>>>recipes) is in fact rubbish. A guru who has invented a bright, human-like
>>>extension: the singular extension!
>>
>>Singular (dual, ternary, etc.) extensions were created by observing a need.  I'm
>>sure there are things you've come up with (but not published, perhaps!) where
>>you've found some aspect of your program lacking, set out to fix it, and found a
>>way to do so.  If you were an academic, at that point you would write up a paper
>>about it.  It has nothing to do with being a guru.
>
>
>
>If you are an academic, you NEED to write a paper in order to be recognized as
>such.
>
>You need to invent something different.
>
>Even if it is not as efficient as what has been published already.
>
>DB is a wonderful opportunity for such "new" things. With or without it, your
>machine is going to be very successful because of the computing power you
>managed to generate. Just add a new algorithm to it, publicize it well, and you
>get credit for this new algorithm as well.
>
>By designing a chess chip, Hsu knows he will only be remembered as a bright
>"technician".
>
>By designing a new algorithm and associating it with a success, he will be
>remembered as a good theoretician. Much better, isn't it?
>
>Well done from a PR point of view. Maybe I'm wrong, but this singular extension
>stuff is so far really suspect in my eyes: why spend so much time on a new
>algorithm that has yet to prove it is worth the effort, when you could have
>boosted the power of your machine by merely using null move or something
>related?
>
>
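
To put rough numbers on the depth-versus-time argument quoted above (the
figures are purely illustrative, not measurements of any program):

    nodes searched:   N = NPS * time
    depth reached:    d = log(N) / log(BF)   (roughly)

    so each doubling of the time buys log(2) / log(BF) plies:
        BF 5  ->  about 0.43 ply per doubling
        BF 3  ->  about 0.63 ply per doubling

    while a 1000x NPS advantage is only a constant head start of
    log(1000) / log(BF), i.e. about 4.3 plies at BF 5.

The NPS advantage shifts the curve up by a constant; the lower BF gives the
other curve the steeper slope, so it eventually catches up.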

I don't think SE has been found 'bad'.  I used it in CB.  Bruce is doing it.
Dave Kittinger did it.  Richard Lang did it.  (I think everyone but CB and
HiTech implemented it in a less expensive and less accurate way, but they are
all still getting results that are quite good, particularly Bruce with
Ferret.)
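
For anyone who hasn't seen the idea, the basic test at a node looks something
like the sketch below.  This is only a simplified illustration of the general
concept, not the actual DB, Cray Blitz, or HiTech code; the Move and Position
types, search(), make_move() and unmake_move() stand in for whatever the engine
already has, and the margin is an arbitrary example value.

    #define SINGULAR_MARGIN 50          /* ~half a pawn; illustrative value    */

    /* Returns 1 if best_move is "singular": every alternative move scores
     * well below what the best move returned.  The caller then extends
     * best_move by one ply. */
    int is_singular(Position *pos, Move best_move, int best, int depth,
                    const Move *moves, int nmoves)
    {
        int i;
        for (i = 0; i < nmoves; i++) {
            int score;
            if (moves[i] == best_move)
                continue;                       /* test only the alternatives  */
            make_move(pos, moves[i]);
            /* zero-width, reduced-depth search around (best - margin)         */
            score = -search(pos, -(best - SINGULAR_MARGIN),
                            -(best - SINGULAR_MARGIN) + 1, depth / 2);
            unmake_move(pos, moves[i]);
            if (score >= best - SINGULAR_MARGIN)
                return 0;       /* another move comes close: not singular      */
        }
        return 1;               /* everything else is much worse: extend       */
    }

The less expensive variants presumably differ mainly in when this test is run
and how much the verification searches are reduced, which is where the cost
savings come from.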



>
>
>>It seems weird to me that when Ed Schroder says Rebel does better without
>>null-move than with it, people believe it, but people criticize the DB team for
>>not using it (e.g. from your text above: "by not using a good, known pruning
>>system...").
>
>
>
>If the DB team did not have enough time, they could simply have taken the null
>move algorithm, because documentation on it is available.
>
>However, null move is not the final word. Rebel does very well with ANOTHER
>pruning system. Junior does very well with ANOTHER pruning system as well. And
>there are other programs that do fine without null move, one of which I know
>very well.
>
>I guess that adding null move to these programs would degrade their
>performance, because it would simply be too much pruning.
>
>Adding null move to a pure alpha-beta searcher like Deep Blue would improve it
>tremendously; that's what I meant.
>
>


Maybe, or maybe not.  It isn't clear how a recursive null move search
interacts with singular extensions.  They are sort of "opposites" when you
think about it.  DB's results are already good enough for me.  I wish my
program were that strong...
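
For reference, recursive null move in a plain alpha-beta searcher looks roughly
like the sketch below.  Again, this is only an illustration, not any particular
program's code: R=2 is just one common choice, and quiesce(), in_check(),
make_null_move() and unmake_null_move() stand in for the engine's own routines.

    #define R 2                         /* null-move depth reduction           */

    int search(Position *pos, int alpha, int beta, int depth)
    {
        int score;

        if (depth <= 0)
            return quiesce(pos, alpha, beta);

        /* Null move: give the opponent a free extra move.  If a reduced-depth
         * search still fails high, the real position is almost certainly good
         * enough to fail high as well, so cut off.  Skipped when in check
         * (passing would be illegal), and real programs also avoid it in
         * zugzwang-prone endgames. */
        if (!in_check(pos) && depth > R) {
            make_null_move(pos);
            score = -search(pos, -beta, -beta + 1, depth - 1 - R);
            unmake_null_move(pos);
            if (score >= beta)
                return beta;
        }

        /* ... normal move loop: generate, make, recurse, unmake ... */

        return alpha;                   /* placeholder for the real move loop  */
    }

One way to read the "opposites" remark above: null move tries to prove a node
is so good that searching less is safe, while the singular test tries to prove
that one move is so much better than the rest that it deserves to be searched
more.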




>
>>  Why is it such an impossibility for DB's selective search not to
>>require it, when some PC programs don't use it either?
>
>
>Every competitive PC program uses a selective search.
>
>Null move is only one selective system amongst others.
>
>
>
>    Christophe


