Author: Amir Ban
Date: 01:45:24 08/05/98
On August 04, 1998 at 19:55:55, Fernando Villegas wrote:

>Hi both:
>Sorry if a miserable non-programmer intervenes, but doesn't the usefulness of
>full-width search depend on how many nodes your machine can calculate? I
>presume that with an ideal, infinitely fast machine, the best you can do is to
>calculate absolutely everything: no null move, no extensions, no pruning. On
>less than ideally fast machines the equation alters very much, but Deep Blue,
>although not infinitely fast, is fast enough to be near the area of validity
>of that reasoning. Maybe if you surpass a certain threshold in speed, the old
>technique recovers some sense. In fact, DB's games against K are a proof of it.
>Excuse me :-)
>Fernando

Scale has nothing to do with it. Not all lines are of equal importance. Some lines have to be analyzed to great depth to decide what they are worth; others need much less. It's a question of balance. Brute force (all lines the same length) is not the right balance. Making the tree too unbalanced, by looking at some lines too little and at others too much, is also wrong. All programmers know that, and found the right balance for themselves through theoretical arguments, but mostly by trial and error.

The techniques used to unbalance the search tree are extensions and forward pruning. These are just flip sides of the same coin. The right way to look at it is this: we are doing iterative deepening, so, with enough time, we are going to find everything there is to be found. We extend a line because we want to find what happens here now, not at the next iteration. We prune a line because we can wait until the next iteration to find out more about it.

There are some bad pruning methods that have problems built into them. A pruning method that acts the same way in every iteration is bad, because it means that when you prune a move, you won't take a deeper look at it in the next iteration; you are pruning it for good. It's a bad idea to use such methods, because no matter how deep you search, you will not correct the errors you make this way. Null-move pruning, for example, has such a problem with zugzwang positions, and every null-mover needs some practical solution to this.

But as long as you stay away from such pruning mistakes, the fact that you do pruning doesn't place any limit on how good your program can be, and if you select a reasonable algorithm, you will certainly be stronger than brute-force search at any scale.

The Deep Thought/Deep Blue programmers did not want to do any pruning, and this was an article of faith for them. To me this was an illogical stand, because they did do extensions (they even claimed to have invented them, quite falsely I think). If you do extensions, but otherwise brute force, you are still pruning the entire tree in a sense, because you can search to less brute-force depth in the same time.

DT/DB was mainly a hardware project, and that was its main achievement. There was never much to be impressed by in their software, and I think their attitude was a waste of silicon. They were spoiled by lack of competition and by isolation, and as a result they were also relatively inexperienced, something which shines through from every second sentence in which they talk about the subject.

Amir
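[For the curious, a minimal sketch of the null-move pruning the post describes, including one practical guard for the zugzwang problem it mentions. The types and helper routines here (Position, evaluate, in_check, has_non_pawn_material, make_null_move, unmake_null_move, search_moves) are hypothetical stand-ins, not code from any actual engine, and the king-and-pawns guard is one common solution, not necessarily the one any particular program uses.]

    /* Null-move pruning inside a negamax search, 1990s-C style.
     * All helpers below are hypothetical stand-ins. */

    typedef struct Position Position;

    extern int  evaluate(const Position *pos);
    extern int  in_check(const Position *pos);
    extern int  has_non_pawn_material(const Position *pos);
    extern void make_null_move(Position *pos);
    extern void unmake_null_move(Position *pos);
    extern int  search_moves(Position *pos, int depth, int alpha, int beta);

    #define R 2  /* null-move depth reduction, a common choice */

    int search(Position *pos, int depth, int alpha, int beta)
    {
        if (depth <= 0)
            return evaluate(pos);

        /* Null move: give the opponent a free second move and search
         * the result to reduced depth with a null window. If even that
         * fails high (score >= beta), prune; the real moves can wait
         * for a deeper iteration. Zugzwang guard: with only king and
         * pawns, "passing" can genuinely be the best move, so the
         * null-move assumption breaks and we skip the trick. */
        if (!in_check(pos) && has_non_pawn_material(pos) && depth > R) {
            int score;
            make_null_move(pos);
            score = -search(pos, depth - 1 - R, -beta, -beta + 1);
            unmake_null_move(pos);
            if (score >= beta)
                return beta;  /* fail high: prune this line for now */
        }

        /* otherwise, the normal full-width move loop with extensions */
        return search_moves(pos, depth, alpha, beta);
    }

The point of the sketch is the guard: a pruning rule whose assumption can fail must be switched off where it fails, which is exactly the kind of practical fix every null-mover needs.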