Author: Robert Hyatt
Date: 20:29:52 07/05/02
On July 05, 2002 at 19:56:10, Christophe Theron wrote:

>On July 04, 2002 at 22:23:34, Robert Hyatt wrote:
>
>>On July 04, 2002 at 12:10:26, Christophe Theron wrote:
>>
>>>On July 04, 2002 at 10:07:37, Robert Hyatt wrote:
>>>
>>>>On July 04, 2002 at 03:49:40, Uri Blass wrote:
>>>>
>>>>>On July 03, 2002 at 14:29:17, Robert Hyatt wrote:
>>>>>
>>>>>>On July 03, 2002 at 13:46:17, Christophe Theron wrote:
>>>>>>
>>>>>>>On July 02, 2002 at 20:20:37, Robert Hyatt wrote:
>>>>>>>
>>>>>>>>On July 02, 2002 at 18:54:49, Keith Evans wrote:
>>>>>>>>
>>>>>>>>>Sorry to be anal retentive, but that's a bit of a stretch. Here's what they say:
>>>>>>>>>
>>>>>>>>>"The chess chips optionally support the use of an external FPGA (Field Programmable Gate Array) to provide access to an external transposition table, more complicated search control, and additional terms for the evaluation function. In theory this mechanism would have allowed the hardware search to approach the efficiency and complexity of the software search. Null move search was also explicitly supported by this method. Due to time constraints, this capability was never used in Deep Blue."
>>>>>>>>
>>>>>>>>Read on. On page 67, section 4.1, item 3, "mate threat".
>>>>>>>>
>>>>>>>>"It is relatively simple using a null move search to detect if there is a threat in the current position.... The Deep Blue implementation ...
>>>>>>>>
>>>>>>>>Which matches what I said. They had support for a normal null-move search had they wanted to use it, but they did use null-move to detect threats, something that has been done before (and several of us use a form of mate threat extension based on this idea presently).
>>>>>>>>
>>>>>>>>So they used null-move in at least one way, without using it as a forward pruning algorithm, which fits with Hsu's "no errors in the search" theme he mentioned repeatedly over the years. Extra extensions were one thing to him, but outright errors were something else, not to be tolerated. Right or wrong. I obviously disagree about the errors in a normal null-move search, but I can hardly argue with their success...
>>>>>>>
>>>>>>>That's my point as well.
>>>>>>>
>>>>>>>I don't argue about their success.
>>>>>>>
>>>>>>>I'm just saying that they succeeded because their chips were very fast. So fast that they allowed them to use inferior search techniques and still succeed.
>>>>>>
>>>>>>Could you not make the _same_ statement about Chess 4.0 in 1975? Until that point _everybody_ was doing forward pruning like mad. They discovered that a shallower full-width search had fewer errors, and they stomped everybody into the ground until everyone converted...
>>>>>
>>>>>It is different. It is obvious that selective search from the first plies is a mistake when you have speed.
>>>>>
>>>>>It also seems obvious that pruning rules that are based on the remaining depth are a good idea, and you can use them and see everything if you search deep enough.
>>>>>
>>>>>Uri
>>>>
>>>>Everybody is overlooking an _important_ detail, so let's take this back to CS101:
>>>>
>>>>1. Forward pruning is a form of selective search. You cull moves you think are no good, so that the rest are basically "extended" or searched deeper than the "lemon" moves.
>>>>
>>>>2. Search extensions do _exactly_ the same thing. They extend the moves you think are "good" so that they are searched more deeply, while the ones you do not extend are not searched that deep.
>>>>
>>>>In simple terms, the two ideas are _identical_ in every way, as far as the final result. To say that doing a full-width search with lots of very sophisticated extensions is not as good as doing a sophisticated selective search (forward pruning) is not a particularly sensible statement to make.
>>>>
>>>>_anybody_ that has spent any time on tree-searching will realize that _either_ will produce _exactly_ the same result, assuming the extensions and forward-pruning are done with the same skill level.
>>>>
>>>>So picking on this aspect of Deep Blue is simply a strawman argument. They clearly do more extensions than the rest of us. Which _may_ offset their lack of forward pruning. Believing or claiming anything else shows a lack of understanding of something...
>>>
>>>In this case, claiming that you are doing brute force just because you do not want errors in your search is also a lack of understanding.
>>>
>>>Didn't Hsu say this? Aren't you repeating his words every time you can?
>>
>>So?
>>
>>>First your point is that they have picked brute force because they had enough power and did not want mistakes in the search, and now you are saying that they had a selective search and that it is equivalent to what can be achieved with strong pruning.
>>
>>I said the two results can be _identical_. This is covered in most good AI books that talk in any detail about minimax search...
>>
>>>Thinking about it, it seems that you can indeed get the same search envelope by either pruning or extending.
>>
>>:) As I said...
>>
>>>But thinking twice about it, I think that it is not possible with the search extension techniques used in Deep Blue to get something equivalent to the simple and efficient pruning techniques we know today.
>>
>>Based on what?
>
>Based on the description they have made of it.
>
>>I _despise_ hearing that kind of statement with _zero_ testing to support it. Very similar to the "bitmaps can't do this or that" statements that show up from time to time. And which I find very amusing as a bitboard practitioner.
>
>I think you don't get what I said.
>
>I'm just saying that given their framework (which is described in their publication) one cannot get a search envelope equivalent to the envelope you get with currently known pruning techniques.

Why? Did you see the part where they extend 2 plies at times? That is _all_ you need to behave just like null-move, which shortens some paths by 2 plies...

>I'm not talking here about the superiority of one system over the other.
>
>You were talking about the classic idea that one can get the same search envelope with either pruning or extensions.
>
>Actually there is no discussion here. It is true, in theory.
>
>That started to make me think about "how can I get the same envelope by using extensions instead of pruning" (in Chess Tiger for example).
>
>And suddenly I find myself thinking about ideas I had never met before.
>
>Here I'm not trying to oppose your ideas. Actually I have forked out of the initial discussion about Deep Blue.

I think either approach is very interesting. I have done both, although I haven't done forward pruning in a _long_ time (other than null-move, of course).

>>Their search definitely "worked". That seems to be all that counts in the game of chess. Wins and losses...
>>
>>>I smell that there is something important behind this and I will have to think more about it. That's an interesting research area.
>>
>>It is also well-known. The two approaches are totally complementary. Same final result. Totally different ways of getting there.
>
>That's what I wanted to say.
>
>For example, you are at 5 plies before the horizon and decide to stop searching here.
>
>What is the equivalent of this when one is using extensions?

You extend all the _other_ moves except here.
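[Since null-move keeps coming up in this thread, here is a minimal sketch of the standard technique as most of us implement it in software. It is not Deep Blue's (or Crafty's, or Tiger's) actual code; `search`, `eval`, and `R` are illustrative names, and the "game" is a toy where the position is just a score for the side to move, chosen so that passing never helps and the pruning is therefore sound.]

```c
#define INF 1000000
#define R   2   /* null-move depth reduction */

/* Toy game, purely illustrative: a move adds 1, 2 or 3 to your score
   and hands the negated position to the opponent. */
int eval(int pos) { return pos; }

/* Negamax with a null-move test: before trying real moves, let the
   opponent move twice and search the result R plies shallower with a
   zero window.  If even "do nothing" fails high, cut off -- this is
   exactly how a null-move searcher shortens some paths by 2 plies. */
int search(int pos, int depth, int alpha, int beta, int can_null)
{
    if (depth == 0)
        return eval(pos);

    if (can_null && depth > R) {
        int score = -search(-pos, depth - 1 - R, -beta, -beta + 1, 0);
        if (score >= beta)
            return score;   /* the pass already refutes us; stop here */
    }

    for (int m = 1; m <= 3; m++) {
        int score = -search(-(pos + m), depth - 1, -beta, -alpha, 1);
        if (score >= beta)
            return score;   /* normal beta cutoff */
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}
```

On this toy game the null-move cutoffs are error-free, so the root value is unchanged: searching position 0 to depth 5 still returns 3. On chess, of course, zugzwang breaks the "passing never helps" assumption, which is the "errors in the search" Hsu objected to.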
>In other terms, can the definition of extensions be expanded to cover both "classical extensions" schemes AND pruning?

I don't see why not. IE the only difference is going to be the iteration depth you report. Which is better: reporting 10 but extending 2 plies, or reporting 12 but cutting most stuff off at 10 plies? IE isn't a "forward prune" basically a "negative extension"?

>I'm always interested in generalizations, as they can help to uncover new ideas.
>
>I don't remember having read a paper on this.
>
>>>>As far as your selective search comments, it is obvious (to me) that everybody is not doing selectivity just deeply in the tree. It is being done near the root as well, based on some very trivial oversights that some programs make from time to time. Oversights that a 4 ply full-width search would see.
>>>
>>>It's not as simple as that.
>>>
>>>"Near the root" can mean several different things.
>>>
>>>You can apply some kind of gross pruning system near the root and make big shortsighted mistakes.
>>>
>>>You can also apply some detection near the root and collect information to prune later. Then you don't make such big mistakes.
>>>
>>>The argument that pruning will make obvious blunders sometimes is simply wrong.
>>
>>That argument is provable. Several have shown positions that Tiger simply can not see. The last one posted here you replied "the forward pruning simply misses that..."
>
>Yes, I remember.
>
>It was Fernando using Chess Tiger for Palm in blitz.
>
>The program was reaching ply depth 3 and missed a fork (or something like that) at the second ply.
>
>I'm not sure you could catch the PC version as easily, even at bullet time controls. :)

The one I recall wasn't a Palm. It was a normal Tiger. IE people regularly report positions that Fritz can simply not solve, period. Because the position zaps null-move searchers.
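[The "negative extension" framing above is easy to make concrete. The horizon a line actually reaches is the nominal iteration depth, plus plies added by extensions, minus plies removed by reductions along the path. A favored line is "+2 extension on a 10-ply iteration" in one bookkeeping and "unreduced on a 12-ply iteration" in the other; a culled line is "unextended at 10" or "reduced by 2 at 12". Same tree, different number printed. The helper name below is mine, not from any engine:]

```c
/* Horizon actually reached by a line: nominal iteration depth, plus
   plies added by extensions, minus plies removed by reductions.
   A forward prune is just the limiting case of a reduction that
   removes all remaining depth. */
int horizon(int iteration_depth, int extensions, int reductions)
{
    return iteration_depth + extensions - reductions;
}
```

So horizon(10, 2, 0) and horizon(12, 0, 0) both reach ply 12, while horizon(10, 0, 0) and horizon(12, 0, 2) both reach ply 10: the two schemes trace the same envelope.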
The same thing will happen to _any_ program that does forward pruning, since there is no way to make it perfect enough to not discard a good move that looks horrible by any imaginable rule.

BTW, Tiger/Fritz aren't the only programs that have "killer positions". I have more than enough for my program...

>Christophe
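[A footnote on the mate-threat idea quoted at the top of the thread: the usual shape of a null-move threat detector is to pass, search shallower, and extend the node if passing loses by a mate score. This is a generic sketch with made-up names and margins, not the Deep Blue hardware's actual scheme:]

```c
#define MATE       32000
#define MATE_BOUND (MATE - 128)  /* scores beyond this encode a forced mate */

/* Depth to search this node's children at.  null_score is the result
   of a reduced-depth null-move search (we pass, the opponent moves
   twice).  If passing gets us mated, the opponent has a concrete
   threat, so we extend one ply instead of stepping down as usual. */
int child_depth(int depth, int null_score)
{
    if (null_score <= -MATE_BOUND)
        return depth;        /* mate threat detected: extend */
    return depth - 1;        /* normal recursion */
}
```

Note this uses null move only as a detector feeding an extension, never as a pruning decision, which is consistent with the "no errors in the search" design discussed above.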
Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.