Author: Robert Hyatt
Date: 07:12:05 01/26/00
On January 26, 2000 at 01:09:58, Christophe Theron wrote:

>On January 26, 2000 at 00:29:00, Robert Hyatt wrote:
>
>>On January 25, 2000 at 21:26:19, Christophe Theron wrote:
>>
>>>On January 25, 2000 at 13:33:36, Dave Gomboc wrote:
>>>
>>>>On January 24, 2000 at 16:10:20, Christophe Theron wrote:
>>>>
>>>>>However, I'm wondering about the singular extension stuff. As I understand
>>>>>it, the cost of detecting singular moves is linear (it would not increase
>>>>>the branching factor, just add a percentage to the total search time), but
>>>>>the cost of the extension itself definitely increases the branching factor
>>>>>(increases the search time exponentially).
>>>>>
>>>>>Of course I have no idea whether it would be worse, in terms of BF, than
>>>>>the set of extensions microcomputers generally use.
>>>>>
>>>>>I think we can safely assume that their branching factor was above 5, and
>>>>>probably significantly higher. And I did not even factor in the extra cost
>>>>>of the parallel search.
>>>>>
>>>>>
>>>>>>I don't think it would do "worse and worse", any more than any other
>>>>>>program would. Although it might do worse as depth decreases, depending
>>>>>>on what they did in their eval.
>>>>>
>>>>>
>>>>>With such a "high" branching factor, you can expect to end up doing worse
>>>>>in terms of average ply depth than a low-BF program.
>>>>>
>>>>>Of course, with their NPS, they start with a huge advantage. But if you
>>>>>draw the curves of ply depth versus time for both DB and a modern
>>>>>commercial program, you can expect DB's curve to eventually be reached by
>>>>>the commercial's curve.
>>>>>
>>>>>That's what I meant by "doing worse and worse". I could have written
>>>>>"doing less and less well".
>>>>>
>>>>>Maybe I'm wrong, because the singular extension stuff would compensate for
>>>>>this, and the pruning system of commercial programs would lose important
>>>>>information compared to a pure alpha-beta search. But I don't think so.
>>>>>
>>>>>
>>>>>My opinion is that Deep Blue is much stronger than micros just because it
>>>>>has a huge NPS.
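
(An aside on the ply-depth-versus-time point above: to first order, time to
reach depth d is roughly BF^d / NPS, so you can put rough numbers on where
the two curves cross. Every figure in the little program below -- 200M NPS
and a BF of 5 for DB, 200K NPS and a BF of 3 for a micro -- is a guess for
illustration, not a measured value:

  /* Back-of-the-envelope version of the "curves" argument: time to
     reach depth d is roughly BF^d / NPS.  All four numbers below are
     illustrative guesses, not measured DB or commercial figures. */
  #include <math.h>
  #include <stdio.h>

  int main(void) {
    double db_nps = 2.0e8, db_bf = 5.0;   /* assumed DB:    200M nps, BF 5 */
    double m_nps  = 2.0e5, m_bf  = 3.0;   /* assumed micro: 200K nps, BF 3 */
    int d;

    printf("depth    DB secs    micro secs\n");
    for (d = 8; d <= 16; d++)
      printf("%5d %10.2f %13.2f\n", d,
             pow(db_bf, d) / db_nps, pow(m_bf, d) / m_nps);
    return 0;
  }

With those made-up numbers, the micro's curve passes DB's somewhere around
13-14 plies, which is exactly the effect being described.)
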
>>>>>
>>>>>But if you present things like this, it's not very sexy.
>>>>>
>>>>>So Hsu decided to use a totally different approach from the micros'.
>>>>>
>>>>>By not using a good known pruning system and introducing a new extension
>>>>>scheme of your own, you present yourself as a pioneer. A genius so bright
>>>>>that he has understood that what everybody else considers very good (null
>>>>>move or similar recipes) is in fact rubbish. A guru who has invented a
>>>>>bright human-like extension: the singular extension!
>>>>
>>>>Singular (dual, ternary, etc.) extensions were created by observing a
>>>>need. I'm sure there are things you've come up with (but not published,
>>>>perhaps!) where you've found some aspect of your program lacking, set out
>>>>to fix it, and found a way to do so. If you were an academic, at that
>>>>point you would write up a paper about it. It has nothing to do with being
>>>>a guru.
>>>
>>>
>>>
>>>If you are an academic, you NEED to write a paper in order to be recognized
>>>as such.
>>>
>>>You need to invent something different.
>>>
>>>Even if it is not as efficient as what has been published already.
>>>
>>>DB is a wonderful opportunity for such "new" things. With or without them,
>>>your machine is going to be very successful because of the computing power
>>>you managed to generate. Just add a new algorithm to it, make good
>>>publicity around it, and you get credit for this new algorithm as well.
>>>
>>>By designing a chess chip, Hsu knows he will only be remembered as a bright
>>>"technician".
>>>
>>>By designing a new algorithm and associating it with a success, he will be
>>>remembered as a good theoretician. Much better, isn't it?
>>>
>>>Well done from a PR point of view. Maybe I'm wrong, but this singular
>>>extension stuff is so far really suspect to my eyes: why spend so much time
>>>on a new algorithm that has still to prove it is worth the effort, when you
>>>could have boosted the power of your machine by merely using null move or
>>>something related?
>>>
>>>
>>
>>I don't think SE has been found 'bad'. I used it in CB. Bruce is doing it.
>>Dave Kittinger did it. Richard Lang did it. (I think everyone but CB and
>>HiTech implemented it in a less expensive and less accurate way, but they
>>are all getting results that are quite good, still, particularly Bruce with
>>Ferret.)
>
>
>
>I don't think there is any trace of SE in Genius. I see nothing that supports
>this idea. I can understand the lines produced by Genius without SE.

I don't know about recent versions. Lang specifically said (some years ago at
a computer chess event) "The only real difference between Genius 1 and Genius
2 was the addition of PV singular extensions." That from "the horse's mouth".
:)
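
For anyone who hasn't seen the idea, a PV singular test looks roughly like
the sketch below. The margin, the reduction, the depth limit, and every
helper name here are invented for illustration -- this is not the DB, CB, or
Genius implementation:

  #define SE_MARGIN     50   /* hypothetical: how much worse the others must be */
  #define SE_REDUCTION   2   /* hypothetical: plies shaved off the verification */

  typedef struct position POSITION;     /* stand-ins for a real engine's types */
  typedef int MOVE;
  extern int  gen_moves(POSITION *p, MOVE *list);    /* returns number of moves */
  extern void make_move(POSITION *p, MOVE m);
  extern void unmake_move(POSITION *p, MOVE m);
  extern int  search(POSITION *p, int alpha, int beta, int depth);

  /* After the first (PV) move at a PV node returns pv_score, verify that
     every other move fails low against (pv_score - SE_MARGIN) in a
     reduced-depth, null-window search.  If all do, the PV move is
     "singular" and the caller extends it one ply. */
  int pv_move_is_singular(POSITION *p, MOVE pv_move, int pv_score, int depth) {
    MOVE list[256];
    int i, n, score, bound = pv_score - SE_MARGIN;

    if (depth < 4) return 0;            /* too shallow to pay for the test */
    n = gen_moves(p, list);
    for (i = 0; i < n; i++) {
      if (list[i] == pv_move) continue;
      make_move(p, list[i]);
      /* null-window probe just below the bound, at reduced depth */
      score = -search(p, -bound, -bound + 1, depth - 1 - SE_REDUCTION);
      unmake_move(p, list[i]);
      if (score >= bound) return 0;     /* a second reasonable move exists */
    }
    return 1;                           /* everything else fails low: extend */
  }

The point is that the verification is a set of cheap null-window,
reduced-depth searches, which is why the detection cost is closer to "a
percentage" than to an extra ply of BF; the extension itself is the part that
costs exponentially.
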
>Kittinger's programs' performances do not advocate in favour of SE, if he
>really does that. No offense intended, but in the end, if SE is so great it
>should help the program to be really at the top...
>
>I don't know what Bruce does; maybe he could tell us just a little bit of
>what he thinks about this...
>

What he does obviously works well, if you saw his Nolot results, and if you
have played him on ICC. Revealing how he does this is up to him, as we have
discussed it many times, but I chose not to do it his way, to avoid revealing
his 'trick'.

>
>
>
>
>>>>It seems weird to me that when Ed Schroder says Rebel does better without
>>>>null-move than with it, people believe it, but people criticize the DB
>>>>team for not using it (e.g. from your text above: "by not using a good,
>>>>known pruning system...").
>>>
>>>
>>>
>>>If the DB team did not have enough time, they could simply have taken the
>>>null move algorithm, because there is documentation available on it.
>>>
>>>However, null move is not the final say. Rebel does very well with ANOTHER
>>>pruning system. Junior does very well with ANOTHER pruning system as well.
>>>And there are other programs that do fine without null move, one of which I
>>>know very well.
>>>
>>>I guess that adding null move to these programs would degrade their
>>>performance, because it would simply be too much pruning.
>>>
>>>Adding null move to a pure alpha-beta searcher like Deep Blue would improve
>>>it tremendously; that's what I meant.
>>>
>>>
>>
>>
>>Maybe, or maybe not. It isn't clear how a recursive null move search
>>interacts with singular extensions. They are sort of "opposites" when you
>>think about it. DB's results are already good enough for me. I wish my
>>program was that strong...
>
>
>
>I don't like the idea of thinking about SE as if it was such a mysterious
>thing.
>
>Recapture and check extensions can be considered in many cases as very cheap
>SE, and I don't see them interacting in any negative way with various
>pruning schemes. Do you see a negative effect of these extensions on your
>null-move Crafty?
>
>I don't see why this should be unclear.
>

The problem is that we extend near the root, then null-move away those
extensions near the tips. Why add a ply at the root and subtract it near the
tip? What was the gain? That is the problem... it makes it harder to evaluate
what is singular and what is not, because the depth below the singular test
is in a high state of flux with null-move tossed in. It may be perfectly
safe. It may cause instabilities... I have a SE version of Crafty that I
fiddle with from time to time. It certainly behaves less well than CB did,
and this extreme null-move stuff is one big difference between the two
programs...

>
>
>Christophe
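
To put the "extend near the root, null-move it away near the tips" problem in
code form, below is the basic recursive null-move idea being discussed, in
rough outline. R, the guard conditions, and the helper names are illustrative
only; real implementations add zugzwang and other safeguards omitted here:

  #define R 2   /* typical null-move depth reduction */

  typedef struct position POSITION;    /* stand-ins for a real engine's types */
  extern int  in_check(POSITION *p);
  extern void make_null_move(POSITION *p);   /* "pass": just flip side to move */
  extern void unmake_null_move(POSITION *p);
  extern int  quiesce(POSITION *p, int alpha, int beta);

  int search(POSITION *p, int alpha, int beta, int depth) {
    int score;

    if (depth <= 0) return quiesce(p, alpha, beta);

    /* Give the opponent a free move.  If a search R plies shallower
       STILL fails high, assume the real position is >= beta and prune
       the whole subtree.  Note the "depth - 1 - R": every null-move
       search on a line removes R plies, which can quietly consume an
       extra ply granted by a singular extension nearer the root --
       the depth "flux" described above. */
    if (!in_check(p) && depth > R) {
      make_null_move(p);
      score = -search(p, -beta, -beta + 1, depth - 1 - R);
      unmake_null_move(p);
      if (score >= beta) return beta;
    }

    /* ... the normal move loop (generate, make, recurse with
       search(p, -beta, -alpha, depth - 1 + extension), unmake) ... */
    return alpha;   /* placeholder for the regular alpha-beta result */
  }

The singular decision from the earlier sketch is made on top of this moving
target, which is why it is hard to say in advance whether the combination is
safe or unstable.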