Author: Robert Hyatt
Date: 03:35:02 08/05/98
On August 05, 1998 at 04:45:24, Amir Ban wrote:

>On August 04, 1998 at 19:55:55, Fernando Villegas wrote:
>
>>Hi both:
>>Sorry if a miserable non-programmer intervenes, but doesn't the usefulness of
>>full-width search depend on how many nodes your machine can calculate? I
>>presume that with an ideal, infinitely fast machine, the best you can do is
>>to calculate absolutely everything: no null move, no extensions, no pruning.
>>On less than ideally fast machines the equation changes very much, but Deep
>>Blue, although not infinitely fast, is fast enough to be near the area where
>>that reasoning is valid. Maybe if you surpass a certain threshold in speed,
>>the old technique recovers some sense. In fact, DB's games against Kasparov
>>are a proof of it.
>>Excuse me :-)
>>Fernando
>
>
>Scale has nothing to do with it. Not all lines are of equal importance. Some
>lines have to be analyzed to great depth to decide what they are worth;
>others need much less. It's a question of balance. Doing brute force (all
>lines the same length) is not the right balance. Making the tree too
>unbalanced by looking at some lines too little and others too much is also
>wrong. All programmers know that, and found the right balance for them
>through theoretical arguments, but mostly by trial and error.
>

We may have a terminology problem, in that I (and several others) don't
consider "brute force" to be a full-width search following all branches to a
fixed depth. I've always considered "brute force" to be the opposite of
"forward pruning", and that's how it generally shows up in the literature I
use. In the early days (Greenblatt's program is an example) heavy forward
pruning was used, right on up through Chess 3.x in 1975. In 1976 you find
another reference to "brute force rather than forward pruning" in the
Slate/Atkin paper, even though they did the check extension. I was a
"forward-pruner" until 1978, when hardware speeds made full-width work for a
change...
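For readers following along: "brute force" in Hyatt's sense means full-width,
fixed-depth alpha-beta, which never discards a move for static reasons. A
minimal sketch over a toy game tree (nested lists, leaf scores from the point
of view of the side to move there; illustration only, not anyone's actual
engine code):

```python
INF = 10**9

def negamax(node, alpha=-INF, beta=INF):
    # A leaf's score is from the point of view of the side to move there.
    if isinstance(node, int):
        return node
    best = -INF
    # Full width: every legal move is searched, to the same depth.
    for child in node:
        score = -negamax(child, -beta, -alpha)
        if score > best:
            best = score
        if best > alpha:
            alpha = best
        if alpha >= beta:
            # Alpha-beta cutoff: sound, it never changes the result,
            # unlike forward pruning, which guesses and can miss tactics.
            break
    return best

print(negamax([[3, 5], [2, 9]]))  # root chooses the first move, value 3
```

The only "pruning" here is the alpha-beta cutoff, which is provably safe; a
forward pruner would instead drop some children of a node before searching
them at all, which is where a hidden 2-mover can be lost.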
>The technique used to unbalance the search tree is extensions and forward
>pruning. These are just flip sides of the same coin. The right way to look at
>it is this: we are doing iterative deepening, so, with enough time, we are
>going to find everything there is to be found. We extend this line because we
>want to find what happens here now, not at the next iteration. We prune this
>line because we can wait till the next iteration to find out more about it.
>

I agree (to an extent) with your "flip side of the coin" in that this is
fiddling with the search depth, reducing some branches so that others can be
searched deeper. But it is not the same as forward pruning, because with
forward pruning you can miss a 2-mover even though you search 6 full moves
deep along the shallowest branches.

>There are some bad pruning methods that have problems built into them. A
>pruning method that acts the same way in every iteration is bad, because it
>means that when you prune a move, you won't take a deeper look at it next
>iteration; you are pruning it for good. It's a bad idea to use such methods,
>because no matter how deep you search, you will not recover from errors you
>make this way. Null-move pruning, for example, has such a problem with
>zugzwang positions, and every null-mover needs some practical solution to
>this.
>
>But, as long as you stay away from such pruning mistakes, the fact that you
>do pruning doesn't place any limit on how good your program can be, and if
>you select a reasonable algorithm, you will certainly be stronger than
>brute-force search at any scale.

This I'm not sure about, because there's no evidence to support or contradict
it. I.e., what says we *must* do some sort of forward pruning, and that if we
do, we will be stronger than programs that don't? In 1975 everyone was doing
forward pruning; in 1976 Chess 4.0 walked over everyone by not making any
6-ply tactical mistakes at all.
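The null-move idea Amir mentions can be sketched as follows. This is a toy
illustration with assumed details: R = 2 is a commonly used reduction, the
`zugzwang` flag is a stand-in for whatever practical guard an engine uses
(e.g. disabling the null move when the side to move has only king and pawns),
and over a static tree the reduced-depth probe simply re-evaluates the node
from the opponent's side, whereas a real engine makes an actual "pass" move on
the board and searches the opponent's real replies:

```python
INF = 10**9
R = 2  # null-move depth reduction; R = 2 is an assumed, common choice

def evaluate(node, color):
    # Static scores are stored from White's view; flip for the side to move.
    return color * node["eval"]

def search(node, depth, alpha, beta, color, allow_null=True):
    children = node.get("children")
    if depth <= 0 or not children:
        return evaluate(node, color)
    # Null-move pruning: give the opponent a free move at reduced depth.
    # If we still fail high (score >= beta), assume a real move would too,
    # and cut off without searching any real move.  The zugzwang guard is
    # the "practical solution" Amir refers to: in zugzwang, passing beats
    # every legal move, so the trick is unsound there and must be off.
    if allow_null and depth > R and not node.get("zugzwang", False):
        score = -search(node, depth - 1 - R, -beta, -beta + 1, -color,
                        allow_null=False)
        if score >= beta:
            return beta
    for child in children:
        score = -search(child, depth - 1, -beta, -alpha, -color)
        if score > alpha:
            alpha = score
        if alpha >= beta:
            break
    return alpha

# A balanced toy position: the null probe fails low, the full search runs.
pos = {"eval": 0, "children": [
    {"eval": 500, "children": [{"eval": 450}, {"eval": 520}]},
    {"eval": -300, "children": [{"eval": -280}]},
]}

# A clearly winning position inside a (100, 200) window: the reduced-depth
# null probe already fails high, so the real moves are never searched.
won = {"eval": 900, "children": [{"eval": 850}, {"eval": 880}]}
```

Setting `"zugzwang": True` on a node forces the full search there, which is
exactly the kind of per-position switch-off Amir says every null-mover needs.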
>
>The Deep Thought/Deep Blue programmers did not want to do any pruning, and
>this was an article of faith for them. To me this was an illogical stand,
>because they did do extensions (they even claimed to have invented them,
>quite falsely I think). If you do extensions, but otherwise brute force, you
>are still pruning the entire tree in a sense, because you can search to less
>brute-force depth in the same time.

I'm not sure what you mean about "claimed to have invented them, quite
falsely...", but doing extensions and forward pruning are entirely different
things, because they affect two different dimensions of the search tree.

>
>DT/DB was mainly a hardware project, and that was its main achievement. There
>was never much to be impressed by in their software, and I think their
>attitude was a waste of silicon. They were spoiled by lack of competition and
>isolation, and as a result they were also relatively inexperienced, something
>which shines through from every second sentence in which they talk about the
>subject.
>
>Amir

"Inexperienced" doesn't come close to the truth. I've known 'em all. Murray
was the one who first suggested the PVS algorithm to me while in Washington,
DC at an ACM event in 1978. We made the changes to "Blitz" (at the time) and
tried them, and it looked quite decent. He's been around a long time now and
probably knows as much about "search" as any of us. Hsu is another story
altogether. Once you get to know him, you discover that he is not just
bright, he is bright as hell... What they did wasn't "trivial" and it sure
wasn't "easy". I only wish that "DB Junior" would come out and play some on
ICC so that everyone can see just how strong that box is (I'd bet it would
erase "Scratchy's" record as having the best win/lose record of all time on
ICC). And then everyone could only try to imagine what it plays like when it
is 16 times faster, in "full-blown DB mode" on a real SP...
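For readers who haven't met it, the PVS (principal variation search) idea
mentioned above: search the first move at each node with the full window, then
probe the remaining moves with a zero-width window, re-searching only when a
probe fails high. A minimal sketch over a toy tree (leaf scores from the point
of view of the side to move there; not Blitz's or Crafty's actual code):

```python
INF = 10**9

def pvs(node, alpha, beta):
    # Leaves hold scores from the point of view of the side to move there.
    if isinstance(node, int):
        return node
    first = True
    for child in node:
        if first:
            # First (presumed best) move: full-window search.
            score = -pvs(child, -beta, -alpha)
            first = False
        else:
            # Remaining moves: zero-width probe around alpha, just to prove
            # they are worse than the move we already have...
            score = -pvs(child, -alpha - 1, -alpha)
            if alpha < score < beta:
                # ...re-searching with the full window only when the probe
                # fails high (the cheap proof failed).
                score = -pvs(child, -beta, -alpha)
        if score > alpha:
            alpha = score
        if alpha >= beta:
            break  # cutoff
    return alpha

print(pvs([[3, 5], [2, 9]], -INF, INF))  # same value as full-window alpha-beta
```

The payoff depends on move ordering: when the first move really is best, the
zero-width probes are much cheaper than full-window searches, and the
occasional re-search costs little.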
Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.