Author: Ratko V Tomic
Date: 18:05:36 10/27/99
>Argh. I had a long reply typed in, but Netscape cratered on me during my last
>paragraph. It managed to do this two posts ago too. :(

Netscape will sometimes time out waiting on the CCC server. To save myself having to retype, I've gotten into the habit of marking the text and pressing Ctrl-Ins before sending, so that the typed material is in the clipboard buffer and can be resent when the server starts responding.

>Maybe I'll retype it all later, but one that is quick is that I think your
>analogy about the boy and the books is seriously flawed. I'm not going into
>detail: if you really don't think that was a straw man argument you can say so.

From your perspective, which assumes that summing the positional evaluation terms (adapted from human chess theory) at the leaf nodes of an alpha-beta search is quite a good (if not the best) use of such chess knowledge, sure, the analogy is thoroughly flawed. From the perspective of a (presently) small minority, the point the analogy makes, about using a fine tool in an entirely wrong-headed and crude way, is perfectly apt.

The more general point, of which this was an illustration, is that chess has a much greater degree of lawfulness, or pattern, than the mere rules for moving the pieces and the nominal piece values can capture. Human players, relying on this pattern (part of which is expressed in chess strategy advice and wisdom), can, with an entirely trivial number of nodes searched or examined, play at a level equal to or above a program which "evaluates" millions of nodes per second. There is clearly a more effective way to use such knowledge than what is presently done. Obviously, to do so one has to know what the content of such a pattern is, what exactly these higher-level laws of chess are, and how they are used. That is what Botvinnik, being one of the handful of strongest human players ever and simultaneously one of the most rational and systematic positional players among his peers, was highly qualified to undertake. His Pioneer scheme, even if his coworkers never wrote a line of code to test it, is the best and clearest attempt to date at characterizing this pattern, these laws, and their use in algorithmic terms.

Let me use another analogy. In combinatorial optimization there is a famous problem, the Travelling Salesman Problem (TSP): find the shortest closed route a salesman must take to visit N cities (given their coordinates, or the distances between any two cities). Each city must be visited exactly once, and at the end he must return to the initial city. The exact algorithm (corresponding to alpha-beta in minimaxing) is branch and bound, where all combinations of visit order are tried out, but the computation of a route is terminated as soon as the partial length of the route being tested exceeds the length of the best route found so far (i.e. if a proposed route, after visiting only a subset of the cities, is already longer than the best complete route found earlier, there is no point checking the various combinations that visit the remaining cities, since the total can only get longer). This method achieves the same kind of economy as alpha-beta in minimaxing (roughly the square root of all combinations need to be examined), hence the problem remains exponential in N. A small sketch of this pruning idea follows.
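To make the pruning idea concrete, here is a minimal branch-and-bound sketch in Python. It is only an illustration of the principle described above, not code from any program under discussion; the names (tsp_branch_and_bound, dist, extend) and the small distance matrix in the comment are invented for this example, and dist is assumed to be a symmetric matrix of pairwise city distances.

    import math

    def tsp_branch_and_bound(dist):
        """Exact TSP by exhaustive search with a length bound (illustrative sketch)."""
        n = len(dist)
        best_len = math.inf
        best_route = None

        def extend(route, length, remaining):
            nonlocal best_len, best_route
            # Bound: a partial route already as long as the best complete tour
            # cannot lead to anything shorter, so the whole subtree is skipped.
            if length >= best_len:
                return
            if not remaining:
                total = length + dist[route[-1]][route[0]]   # close the tour
                if total < best_len:
                    best_len, best_route = total, route[:]
                return
            for city in list(remaining):
                remaining.remove(city)
                route.append(city)
                extend(route, length + dist[route[-2]][city], remaining)
                route.pop()
                remaining.add(city)

        extend([0], 0.0, set(range(1, n)))
        return best_route, best_len

    # Example (invented 4-city instance):
    # dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
    # tsp_branch_and_bound(dist)  ->  ([0, 1, 3, 2], 18)

Without the "length >= best_len" test this is plain exhaustive enumeration; with it, whole families of route completions are discarded the moment a prefix is already too long, which is the same kind of cutoff alpha-beta applies to branches of the game tree.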
Since similar problems occur in practical computing tasks, and the problem is catchy and somewhat addictive even on its own, a great many heuristic methods have been invented over the decades. They can obtain a solution for large N (in the thousands) which is within a percent of the exact solution, but in time proportional to N^2 or N^3 instead of the exp(a*N) required for the exact solution. Some very effective heuristic methods rely on human-like visual approaches to the problem; e.g. they may partition or cluster the cities, solve the smaller problem over the clusters, then solve each cluster internally (a toy sketch of this idea is given at the end of this post). The basic idea in many such methods is to capture and utilize more regularity in the problem than the bare problem statement contains. The basic branch-and-bound algorithm for TSP (like alpha-beta in chess programming) relies only on the minimal information specific to the problem, essentially only how to generate routes and how to calculate their lengths (otherwise it is a general combinatorial-optimization shortcut, just as a basic alpha-beta chess searcher is a general shortcut for minimaxing, using only the move-generation logic, termination rules and basic material values specific to chess). The successful heuristic methods for TSP utilize much more problem-specific information, many more higher-level patterns and regularities observed in optimal routes. This information is used to guide and restrict the combinatorial brute-force search, even though some level of highly constrained search usually remains.

The TSP thus gives an example of a computationally demanding problem (when the exact solution is sought) whose rules are simple enough that, over the several decades it has been researched, it was possible to discover higher-level problem-specific regularities and make good use of them to produce very nearly optimal solutions in a fraction of the time needed for the exact solution. Chess, which has been around and has had human theories for much longer than the TSP, also has these higher-level problem-specific laws, and chess programs will eventually fully utilize them (i.e. when the exponential computation is replaced by low-degree polynomial computation). Strong human players clearly use such laws (consciously and otherwise) in this highly effective manner. Whatever the current prevailing wisdom says about Botvinnik's scheme, its offspring or something similar to it will eventually drive the present exponential searchers into extinction.
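And here is the toy sketch of the cluster-then-solve idea promised above, again in Python. It is a deliberately naive illustration, assuming the cities are given as (x, y) points; the crude split by x-coordinate and the nearest-neighbour stitching are stand-ins for the far better published heuristics, and all names are invented for this example.

    import math

    def distance(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def nearest_neighbour_order(points):
        # Greedy ordering: always continue to the closest not-yet-visited point.
        remaining = list(points)
        tour = [remaining.pop(0)]
        while remaining:
            nxt = min(remaining, key=lambda p: distance(tour[-1], p))
            remaining.remove(nxt)
            tour.append(nxt)
        return tour

    def clustered_tour(points, n_clusters):
        # 1. Partition the cities into crude clusters (here just by x-coordinate,
        #    a stand-in for the visual grouping a human would do).
        xs = sorted(points, key=lambda p: p[0])
        size = max(1, math.ceil(len(xs) / n_clusters))
        clusters = [xs[i:i + size] for i in range(0, len(xs), size)]
        # 2. Solve the small problem over the clusters: order them by a
        #    nearest-neighbour tour of their centroids.
        centroids = [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
                     for c in clusters]
        ordered = [clusters[centroids.index(c)] for c in nearest_neighbour_order(centroids)]
        # 3. Solve each cluster internally and concatenate the local tours.
        tour = []
        for cluster in ordered:
            tour.extend(nearest_neighbour_order(cluster))
        return tour

    def tour_length(tour):
        return sum(distance(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

The point of the exercise is only the shape of the method: the combinatorial work is confined to small subproblems (ordering a handful of clusters, ordering the cities inside each cluster), so the total cost grows polynomially in N, at the price of giving up the guarantee of the exact optimum.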