Author: Don Dailey
Date: 09:49:26 01/22/98
One problem, in my opinion, with trying to tune against master moves is that we are limiting ourselves to their strength. Keep in mind that in a large database of master games, a large percentage will have a weak player on one or both sides. By weak I mean arguably weaker than the program. OK, so let's say you expend the effort to isolate the very highest quality games, by the very best players only. Now the problem is style. It is completely unclear that our programs should be trying to play chess in the same "style" as grandmasters.

We like to believe that there is something universal about the best players' style, but in fact a lot of this is influenced by the times and by who the best players are; it's a human thing. It's for the same reasons we all dress the way we do: we don't stray too much from what is accepted as normal, and we imitate people we admire (although many believe they are above this). It is well known that strong players are influenced more by beauty, style and patterns. Given a choice between a direct way to win and a more "natural" or beautiful way to win, the human will choose the indirect route because it is easier for him and pleases him and others. If your goal is to match the style of humans (as opposed to being as strong as possible), then this might work well.

I agree 100% with your comments. I noticed that it takes an enormous amount of data to represent some fundamental pieces of knowledge adequately enough for a computer to learn from them. A human might learn from a single instance of an example that manifests itself in a powerful way, but a single instance gets completely lost in the noise for the typical learning algorithms. I believe the primary benefit of this approach (if done well) is to get good, balanced, and relatively correct general principles down pat. Trying to pick up fine points and sophisticated stuff may very well prove extremely difficult. Noise is the major problem.

I went through a 250,000 game database looking for king and pawn endings.
I was looking for data to improve this aspect of my program and wanted to focus on it. I was astounded at how very few there were. My feeling is that the players just resigned or took draws before getting into them. But the point is that you should expect to have a very hard time extracting even general principles for this ending if there is almost no representative data!

- Don

On January 22, 1998 at 06:25:06, Amir Ban wrote:

>On January 21, 1998 at 18:35:45, Stuart Cracraft wrote:
>
>>Is there any good research done on pattern matching
>>whole positions? What I'm thinking of is some measure
>>for a position to determine how likely a Joe-Schmoe master
>>would be to move into this position.
>>
>>I still wish someone would post the method used
>>for the Deep Blue/Deep Thought evaluation function tuning.
>>Does anyone have a handle on what was done? It seems
>>like a pretty good idea to avoid the human labor cost
>>of hand-tuning.
>>
>>--Stuart
>
>
>The way I read the Anantharaman article, the results of the tuning were
>not conclusive, and were not actually adopted into DT/DB.
>
>I was already familiar with the idea of tuning by matching master moves,
>and what I think was the main interest of the article was the attempt to
>formulate the evaluation mathematically so that it is suitable for
>analytical optimization methods.
>
>Years ago I had such a master-move match benchmark, and I ran it fairly
>regularly. I became disillusioned with it and I don't run it any more.
>It's simply too insensitive, governed by many irrelevant factors, and
>today I'm sure that a program can improve by 100 rating points at least
>without affecting the match score.
>
>One thing that bothered me about that article is that they invested all
>the effort in the wrong place.
>If I had a theory that some measure like
>master-move matches reflects program strength, I would tweak a
>parameter, see how it affects the measure, and do an independent test
>such as a program self-test to see if the theory is right. A few such
>experiments are all that is needed to prove or disprove the theory. Doing
>multivariate least-squares optimization is nice, but if the theory is
>correct, I may not need it to put the measure to good use, while if the
>theory is not correct, it's a complete waste of time. Of course, my
>method may not make a Ph.D. thesis.
>
>This is my standard answer to all the other optimization, genetic
>algorithm, and so on ideas: forget about them for a moment and concentrate
>on proving that your measure or theory works. Do the fancy stuff later,
>though I think I can make good use of a proven measure even without it.
>
>Amir
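The benchmark Amir describes can be sketched in a few lines: score an engine configuration by the fraction of test positions in which it picks the same move the master played, then see whether tweaking an evaluation parameter moves that score. This is a minimal illustrative sketch, not anyone's actual program; the toy "engine", the `capture_bonus` parameter, and the tiny test set are all hypothetical stand-ins.

```python
# Sketch of a master-move match benchmark (hypothetical toy, not a real engine).
# A "position" here is just a list of candidate moves with static scores;
# the "engine" picks the highest-scoring move after adding a tunable bonus
# to captures, standing in for one evaluation parameter being tweaked.

def match_rate(engine, test_set):
    """Fraction of (position, master_move) pairs where the engine's choice
    matches the move the master actually played."""
    hits = sum(1 for position, master_move in test_set
               if engine(position) == master_move)
    return hits / len(test_set)

def make_engine(capture_bonus):
    """Build a toy engine parameterized by a single tunable weight."""
    def pick_move(position):
        return max(position,
                   key=lambda m: m["score"]
                   + (capture_bonus if m["capture"] else 0.0))["name"]
    return pick_move

# Tiny fabricated test set, purely illustrative.
test_set = [
    ([{"name": "Nf3",  "score": 0.30, "capture": False},
      {"name": "Qxd5", "score": 0.20, "capture": True}], "Nf3"),
    ([{"name": "exd5", "score": 0.10, "capture": True},
      {"name": "h3",   "score": 0.15, "capture": False}], "exd5"),
]

for bonus in (0.0, 0.2):
    print(f"capture_bonus={bonus}: match rate {match_rate(make_engine(bonus), test_set)}")
```

In this fabricated two-position set, both parameter settings happen to score 0.5 while choosing different moves, which is a toy version of Amir's complaint: the match score can sit still while the program's behavior changes. Hence his insistence on an independent check, such as self-play, before trusting the measure at all.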
Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.