Author: Ratko V Tomic
Date: 14:10:34 10/18/00
> When I say I don't see the need for that, I'm stating my belief that it is possible to get there by using the same evaluation framework I already have, and, by implication and guess, what others already have.

There are a couple of major problems with knowledge/evaluation functions as they are currently practiced by the playing programs (at least in the published info). One is load distribution: they tend to be "leaf heavy," with the bulk of the work done at the leaves, thus replicating their computation to the maximum extent. That tends to make adding a new term a balancing act between node count and the value obtained from the extra term, and testing the cost/benefit ratio requires lots of games to see any of the finer differences. The alternative, "root heavy" preprocessors, has the obvious problem of basing its evaluations on facts/features of a position which may disappear deeper in the tree (a toy illustration of the trade-off is sketched below).

The other major problem is how one expands the knowledge. Guessing a function, testing it, and tuning it is not very productive; it takes a lot of work and chess expertise on the part of the programmer to add a little knowledge.

Creating a chess language interpreter, which can take knowledge statements and then adjust the search engine parameters and evaluation functions, does help with the second problem: it offers a systematic way to automatically translate knowledge from human to computer form and transfer it into the engine control functions. But it doesn't help with the load-balancing problem, since the knowledge is still passed to either the root preprocessors or the leaf evaluators. The language helps the human interface, but not the engine operation, balancing, and tuning.

I don't know of any approach other than Botvinnik's which aimed to cover both major problems for a general playing program (yes, some planning code was written for mate or combination finding, and some for certain KP endgames, but not for a general strategizing program, other than Botvinnik's work). Their load balancing distributes the knowledge-related computation evenly across the depth of the upper-level search tree (the small but deep, human-like tree in the upper layer). Their main practical failure was having to hard-code every bit of knowledge into custom functions which had to interface with the lower-level alpha-beta search. That got out of hand fast, and they couldn't make it work that way. Had they created a language interpreter first, to automate the process of transferring knowledge from human to their upper-layer search (at the expense of some performance hit versus hard-coded functions), they might have completed the project. But their basic load balancing was the best proposal I have seen published, and they had that part working with what little knowledge code they could hard-code into the prototype. Disregarding what they actually managed to get working in the program, the overall scheme still stands as the most comprehensive (in breadth of coverage and specificity of proposed solutions) and the most forward-looking effort to date.
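
To make the leaf-heavy vs. root-heavy distinction concrete, here is a minimal Python sketch of the trade-off on a toy game. Everything here is hypothetical (the Position class, the "expensive" knowledge term, the move generator); it is an illustration of the load distribution, not how any real engine is built. The leaf-heavy evaluator recomputes the costly knowledge term at every leaf; the root-heavy one computes it once at the root and reuses a value that may no longer describe positions deep in the tree.

```python
class Position:
    """Toy position: a short list of numbers standing in for board state."""
    def __init__(self, state):
        self.state = state

    def moves(self):
        # Hypothetical move generator: each "move" perturbs the state.
        return [Position(self.state[1:] + [self.state[0] + d])
                for d in (-1, 0, 1)]

def cheap_material(pos):
    # Fast term; fine to recompute at every leaf.
    return sum(pos.state)

def expensive_positional(pos):
    # Stand-in for slow knowledge terms (pawn structure, king safety, plans).
    return sum(x * x for x in pos.state) % 7

def leaf_heavy_eval(pos, root_features=None):
    # All knowledge recomputed at the leaf: accurate but costly, so every
    # added term trades node count against the value it contributes.
    return cheap_material(pos) + expensive_positional(pos)

def root_heavy_eval(pos, root_features):
    # Expensive term computed once at the root and reused: cheap, but the
    # cached features may have disappeared deeper in the tree.
    return cheap_material(pos) + root_features

def alphabeta(pos, depth, alpha, beta, maximizing, evaluate, root_features):
    if depth == 0:
        return evaluate(pos, root_features)
    best = float('-inf') if maximizing else float('inf')
    for child in pos.moves():
        v = alphabeta(child, depth - 1, alpha, beta, not maximizing,
                      evaluate, root_features)
        if maximizing:
            best = max(best, v); alpha = max(alpha, v)
        else:
            best = min(best, v); beta = min(beta, v)
        if beta <= alpha:
            break
    return best

root = Position([1, -2, 3, 0])
features = expensive_positional(root)   # root preprocessor: run exactly once
print(alphabeta(root, 4, float('-inf'), float('inf'), True,
                leaf_heavy_eval, None))
print(alphabeta(root, 4, float('-inf'), float('inf'), True,
                root_heavy_eval, features))
```

At depth 4 with branching factor 3 the leaf-heavy version runs the expensive term dozens of times, the root-heavy version once; the price is that the cached features go stale exactly as described above.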
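Equally hypothetical is this toy of the "chess language interpreter" idea: knowledge statements in a tiny invented syntax are parsed once into evaluation terms and engine parameters, instead of each piece of knowledge being hand-coded. The syntax and the names (term, param, futility_margin) are illustrations only, not any published chess language.

```python
KNOWLEDGE = """
term rook_on_open_file   bonus 25
term doubled_pawn        penalty 12
param futility_margin    150
"""

def interpret(source):
    """Translate knowledge statements into engine control tables."""
    terms, params = {}, {}
    for line in source.strip().splitlines():
        kind, name, *rest = line.split()
        if kind == "term":
            sign = 1 if rest[0] == "bonus" else -1
            terms[name] = sign * int(rest[1])    # signed centipawn weight
        elif kind == "param":
            params[name] = int(rest[0])          # search-engine parameter
    return terms, params

terms, params = interpret(KNOWLEDGE)
print(terms)    # {'rook_on_open_file': 25, 'doubled_pawn': -12}
print(params)   # {'futility_margin': 150}
```

Note that this automates the human-to-engine transfer, but the resulting tables still get consumed by either a root preprocessor or a leaf evaluator, which is exactly why a language alone doesn't solve the load-balancing problem.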