Author: Graham Laight
Date: 02:13:43 10/14/98
On October 09, 1998 at 12:47:00, Bruce Moreland wrote:

>On October 09, 1998 at 09:18:31, Graham Laight wrote:
>
>>Once again, I have found myself pondering on the problem of making computer play more like human play.
>>
>>Setting aside the two obvious drawbacks (speed and getting the knowledge into the system), wouldn't setting up a rules base result in evaluating a position in a similar way to the way good humans do? (Or at least the way good humans SAY they do!)
>>
>>The way I would envisage it working would basically be as follows:
>>
>>- a position is presented to the rules base for evaluation
>>
>>- some of the rules "fire"
>>
>>- the firing of some rules causes other rules in the rule base to fire
>>
>>- each time a rule fires, it "scores" some aspect of the system
>>
>>- further code within each rule then creates a "weighting" for how important this rule is likely to be in this type of position (for example, if white is in check, and black is to move, then the weighting would be 100% because this is of fundamental importance)
>>
>>- the scores and weightings are used to make an evaluation
>>
>>If the system had roughly the same rules and weightings as a human player, one would expect the system to evaluate positions in the same way.
>>
>>I'd be interested to read people's thoughts on this.
>
>This is an example of the "just make it play like a human" solution to computer chess. The assumption is that it is possible to reduce the game to a set of positional maxims and tactical patterns, build some sort of system to recognize and weigh relative values of all of these, and out pops a move.
>
>I think that humans have a very good sense for positional maxims and tactical patterns, but they also have a very good sense for knowing when exceptions arise, and a good sense for knowing when they have to search a position that requires more investigation, and a system that is able to mimic this is going to be extremely complicated.
>
>Here's a specific example that we might be able to use to illustrate some of the problems involved here.
>
>Imagine white pawns on a5, b5, c5, black pawns on a7, b7, c7, with the kings at some random place on the k-side, white to move. This is a classic example of a pawn sneaker combination, white plays b6 and sacrifices two pawns in order to queen one. If you decide this is an important pattern, you need to be careful about where the kings are, in order to avoid exception cases. I doubt it ever works with the black king on b8, for instance.
>
>What do you do about this, define all of the exception cases beforehand? That sounds extremely tedious for little reward. You end up with a lot of time and a lot of hard coding, just to do one pattern, and even if you can do this here, you can't do this in the middlegame, where things are even more complicated.
>
>Do you somehow mark that something is up with this pattern, but you need to be careful about where the kings are? I don't see what this gains you over normal search.
>
>Do you have some general thing that understands when a sneaker is going to happen, and can understand it in relation to the rest of the board, including the position of the kings, and of other pawns which might still be mobile? If you can write that, I'll congratulate you in advance.
>
>bruce

It's easy to see that, for many types of tactical position, brute force search is more efficient than trying to work it out with knowledge. At the present time, tree generation techniques clearly rule the world.
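Before getting to the drawbacks, and to make the rules-base idea quoted above a little more concrete, here is a minimal sketch in Python. Everything in it - the rules, their scores and weightings, and the position attributes (in_check, has_advanced_passer, in_endgame) - is invented purely for illustration, and the chaining of one rule firing another is left out for brevity.

# Minimal sketch of the rules-base evaluation described above.
# All rules, scores and weightings here are invented for illustration.
from types import SimpleNamespace

class Rule:
    def __init__(self, name, condition, score, weight):
        self.name = name            # human-readable label
        self.condition = condition  # position -> bool: does the rule "fire"?
        self.score = score          # position -> score for the aspect this rule covers
        self.weight = weight        # position -> how important the rule is in this position type

def evaluate(position, rules):
    """Sum score * weighting over every rule that fires on this position."""
    total = 0.0
    for rule in rules:
        if rule.condition(position):
            total += rule.score(position) * rule.weight(position)
    return total

# Two toy rules, assuming a position object with the attributes used below.
rules = [
    Rule("side to move is in check",
         condition=lambda p: p.in_check,
         score=lambda p: -50,
         weight=lambda p: 1.0),      # 100% weighting: of fundamental importance
    Rule("passed pawn on the 6th or 7th rank",
         condition=lambda p: p.has_advanced_passer,
         score=lambda p: 40,
         weight=lambda p: 0.5 if p.in_endgame else 0.2),
]

# A stand-in position, just to show the shape of a call.
pos = SimpleNamespace(in_check=False, has_advanced_passer=True, in_endgame=True)
print(evaluate(pos, rules))          # 40 * 0.5 = 20.0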
However, there are some important drawbacks with brute-force search:

* Diminishing returns. It continues to get harder to squeeze more power out of the existing techniques.
* It will probably be 10-15 years before the average games player can cheaply command the kind of computing power that Deeper Blue had in 1997.
* Fundamental problems (e.g. the concept of "forever", or something that is outside the search tree because it is too far ahead) cannot be resolved.
* A modern computer ought to be able to simulate the working of a human - but faster.

The kind of problem that Bruce describes above is just the type of problem which a well-organised knowledge system should be able to handle. If this is not so, how can humans evaluate such positions?

Instead of the "rules" analogy, perhaps it would be better to envisage a "knowledge network", where "information entities" (small pieces of chess knowledge) are linked to other pieces of chess knowledge, with the links having different weightings, which could be calculated at runtime depending on aspects of the position. The entities would be grouped together into blocks that, taken together, describe a position type and an algorithm for evaluating that position type.

When the process of "spreading" around the network is complete, the values assigned to each information entity can be totted up, the most appropriate "information blocks" selected, and the appropriate evaluation functions used to assess the position (see the rough sketch at the end of this post).

This, of course, borrows heavily from the ideas of neural networking. But it would work.

I believe that such a system would have strong benefits:

* Easy to add new knowledge and delete old knowledge.
* Building this system would (IMHO) create a much steeper ELO improvement graph than continued tinkering with game trees, with higher ELO ratings possible before the graph starts to level off.
* A more human style of play could be expected.
* A general-purpose AI system would be produced, rather than one which only works well in specialist areas.

Come on men - let's not miss the boat to the future!
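For anyone who wants something more concrete than prose, here is a rough sketch of the knowledge network described above, again in Python. The entities, links, blocks, weights and position attributes are all invented for illustration; a real system would need far richer knowledge and a proper spreading scheme, but the shape of the idea is this:

# Rough sketch of the "knowledge network": information entities linked with
# weighted edges, activation spread around the network, and the best-matching
# information block chosen to evaluate the position.
# All names, weights and functions below are invented for illustration.
from types import SimpleNamespace

class Entity:
    def __init__(self, name, detector):
        self.name = name
        self.detector = detector          # position -> initial activation (0.0 to 1.0)

class Block:
    def __init__(self, name, entity_names, evaluate):
        self.name = name
        self.entity_names = entity_names  # the entities that describe this position type
        self.evaluate = evaluate          # position -> score, used if this block is selected

def spread(position, entities, links, rounds=3, decay=0.5):
    """Initialise activation from the detectors, then spread it along the weighted links."""
    act = {e.name: e.detector(position) for e in entities}
    for _ in range(rounds):
        new_act = dict(act)
        for (src, dst), weight in links.items():
            new_act[dst] = min(1.0, new_act[dst] + decay * weight * act[src])
        act = new_act
    return act

def evaluate_position(position, entities, links, blocks):
    act = spread(position, entities, links)
    # Tot up the activation of each block's entities and pick the best match.
    best = max(blocks, key=lambda b: sum(act[n] for n in b.entity_names))
    return best.evaluate(position)

# A toy network, assuming a position object with the attributes used below.
entities = [
    Entity("passed pawn",  lambda p: 1.0 if p.passed_pawns else 0.0),
    Entity("open file",    lambda p: 1.0 if p.open_files else 0.0),
    Entity("king exposed", lambda p: 1.0 if p.king_exposed else 0.0),
]
links = {("open file", "king exposed"): 0.6}  # an open file makes king exposure matter more

blocks = [
    Block("pawn race",   ["passed pawn"],               lambda p: 80),
    Block("king attack", ["open file", "king exposed"], lambda p: 120),
]

pos = SimpleNamespace(passed_pawns=True, open_files=False, king_exposed=False)
print(evaluate_position(pos, entities, links, blocks))  # selects the "pawn race" block -> 80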