Computer Chess Club Archives


Subject: Re: Never Say "Impossible"

Author: Robert Raese

Date: 16:09:10 05/03/01


On May 03, 2001 at 10:49:02, Robert Hyatt wrote:

>On May 03, 2001 at 10:27:10, Graham Laight wrote:
>
>>On May 03, 2001 at 08:52:44, Robert Hyatt wrote:
>>
>>>On May 03, 2001 at 06:53:26, Graham Laight wrote:
>>>
>>>>
>>>>
>>>>There are 2 glaringly obvious points to be made in reply here:
>>>>
>>>>1. If the weights of the evaluation components are wrong (or if, as seems more
>>>>likely, the program doesn't adjust the weightings according to the type of
>>>>position as well as a human does), this still represents a knowledge deficit
>>>>
>>>>2. The research discussed in "Chess Skill in Man and Machine" indicated that a
>>>>human GM has expert knowledge of about 50,000 positional patterns. Crafty (and,
>>>>I'm sure, any other program) has nothing like that number of evaluation
>>>>components (clearly 1 evaluation component <> 1 positional pattern, but there's
>>>>probably a correlation)
>>>
>>>This is probably a bit of "apples and oranges".  IE humans recognize a
>>>"fork" by pattern.  A program recognizes it with a 3 ply search.   Ditto for
>>>overloaded pieces and a host of other things.  Some things have been relegated
>>>to the search to handle, others have to be done within the eval.
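>>>
>>>(A minimal sketch of the pattern route, for contrast. Every name below is
>>>invented for illustration, none of it is code from any real engine; a
>>>3-ply search finds the same fork simply by playing the moves out.)
>>>
>>>  /* Does the knight on (rank, file) attack two or more enemy pieces
>>>     worth more than itself?  value[r][f] holds the victim's worth,
>>>     or 0 if the square is empty or friendly.                       */
>>>  static const int dr[8] = { 1, 2, 2, 1, -1, -2, -2, -1 };
>>>  static const int df[8] = { 2, 1, -1, -2, -2, -1, 1, 2 };
>>>
>>>  int is_knight_fork(int value[8][8], int rank, int file, int knight_value)
>>>  {
>>>      int i, victims = 0;
>>>      for (i = 0; i < 8; i++) {
>>>          int r = rank + dr[i], f = file + df[i];
>>>          if (r >= 0 && r < 8 && f >= 0 && f < 8 &&
>>>              value[r][f] > knight_value)
>>>              victims++;
>>>      }
>>>      return victims >= 2;   /* two big targets at once = a fork */
>>>  }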
>>>
>>>I certainly agree that the computers are _way_ behind humans in total knowledge.
>>>But the computers are ahead in some types of tactics (they are behind in some
>>>of course).
>>
>>
>>The problem now seems to be that humans are learning how to guide chess games
>>into the type of position where the computers are weak. This will be a position
>>where the program cannot evaluate the nodes accurately enough because:
>>
>>1. In terms of the knowledge the computer has, the nodes all look to be of
>>similar value
>>
>>2. There's an important piece of positional knowledge which the human has but
>>the computer lacks, and whose consequences lie beyond the computer's horizon
>>
>>The remedy would be, of course, to find ways to acquire and manage knowledge.
>>
>>If, for some reason, game tree search had proven to be impracticable for chess,
>>then it might well be that static evaluation would be extremely sophisticated by
>>now.
>
>
>I don't believe tree search has proven to be impractical.  IE I don't see much
>of a problem with blocked positions today, because I have a lot of code that
>specifically avoids such positions.  For example, Crafty understands pawn
>levers quite well and avoids positions where it has few or none.  Not that
>this doesn't need more tuning, but the basic "knowledge" is there.  I don't
>see any reason why the current alpha/beta framework won't eventually produce
>a program that no human can beat or even draw.
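>
>(A toy illustration of what a "pawn lever" term could look like; the
>names and the centipawn weights are invented here, and this is not
>Crafty's actual code.)
>
>  #define WPAWN 1
>  #define BPAWN 2
>
>  /* A white pawn has a lever if it can capture toward an enemy pawn.
>     board[rank][file], rank 0 = white's back rank.                  */
>  int pawn_lever_term(int board[8][8])
>  {
>      int r, f, levers = 0;
>      for (r = 1; r < 7; r++)
>          for (f = 0; f < 8; f++)
>              if (board[r][f] == WPAWN) {
>                  if (f > 0 && board[r + 1][f - 1] == BPAWN) levers++;
>                  if (f < 7 && board[r + 1][f + 1] == BPAWN) levers++;
>              }
>      /* no levers at all: risk of a blocked, search-proof position */
>      return levers ? 4 * levers : -20;
>  }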
>
>
>
>
>>
>>
>>>>
>>>>
>>>>I would reconcile the contradiction by reminding you that I'm not suggesting the
>>>>use of a single technique, but rather a combination of techniques.
>>>
>>>If you look at traditional data mining applications, you generally find that
>>>the humans know what they are looking for.  IE one project here was to look
>>
>>Not necessarily. Data mining can turn up correlations which the operators
>>weren't expecting. When researchers were trying to find the cause of the
>>increase in lung cancer in the 1950s, they tried correlating all sorts of
>>variables with the disease. However, each correlation had to be individually
>>requested by a human. They did, of course, discover the strong correlation
>>with smoking, which led to all the research that uncovered the links.
>>
>>If they'd had the same statistics today, a data mining system would almost
>>certainly have identified smoking as having the strongest correlation with
>>lung cancer, without a human having to specify that this correlation in
>>particular be calculated.
>
>Yes... but you are looking at one particular "feature" of medicine...  lung
>cancer.  That doesn't seem to apply to chess.  If you already know the "feature"
>then you can code something for it.  In chess, it will be "why did I lose" and
>I don't think mining is going to help there.  At least not for a long time.
>
>
>
>>
>>Caveat: data mining is not my field of expertise. I'm still quite confident that
>>the above is correct, though.
>>
>>>at treatment in the ER for bacterial infections and outcomes.  This might have
>>>(years ago) uncovered the fact that young kids, aspirin, and flu often lead to
>>>a serious interaction.
>>>
>>>But chess seems different.
>>
>>
>>I think that the difference is that in chess, it's not immediately obvious that
>>you're searching for correlations between things. I would argue that this is
>>what you actually are doing, however.
>
>yes... but in "mining" the "things" are known.
>
>
>>
>>Suppose, for example, that having your king near the centre of the board when
>>the queens are removed is good. You could get a database of chess games, and
>>correlate king proximity to the centre when the queens are removed with the
>>final score. If there is a correlation, then you've obtained some knowledge. The
>>weighting you would give this knowledge might be related to the strength of the
>>correlation.
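>>
>>(A sketch of that correlation step, assuming you have already extracted one
>>feature value and one final score per game; the function name and arrays
>>are hypothetical, and nothing below comes from a real data mining tool.)
>>
>>  #include <math.h>
>>
>>  /* Pearson correlation between a positional feature (say, king
>>     distance from the centre once the queens are off) and the
>>     final game score, over n games from a database.             */
>>  double correlate(const double *feature, const double *score, int n)
>>  {
>>      double sf = 0, ss = 0, sff = 0, sss = 0, sfs = 0, num, den;
>>      int i;
>>      for (i = 0; i < n; i++) {
>>          sf  += feature[i];
>>          ss  += score[i];
>>          sff += feature[i] * feature[i];
>>          sss += score[i] * score[i];
>>          sfs += feature[i] * score[i];
>>      }
>>      num = n * sfs - sf * ss;
>>      den = sqrt(n * sff - sf * sf) * sqrt(n * sss - ss * ss);
>>      return den > 0 ? num / den : 0.0;   /* weight could track |r| */
>>  }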
>
>That is my point.  You have already picked out the "feature" you want to study.
>I think all (or nearly all) of the "obvious features" are already well known
>and present in most chess engines.
>
>
>
>>
>>Obviously, to make this work, you'd need a method of generating patterns to work
>>with. Maybe genetic algorithms could be used for this purpose.
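>>
>>(A stripped-down evolutionary loop along those lines, mutation only, no
>>crossover; POP, GENES and the fitness callback are all invented for the
>>sketch, e.g. fitness might be a match score against a test suite.)
>>
>>  #include <stdlib.h>
>>
>>  #define POP   32
>>  #define GENES 16   /* one weight per candidate pattern */
>>
>>  void evolve(int pop[POP][GENES], int generations,
>>              double (*fitness)(const int *))
>>  {
>>      int g, i, j;
>>      for (g = 0; g < generations; g++)
>>          for (i = 1; i < POP; i++) {          /* slot 0 is the elite  */
>>              for (j = 0; j < GENES; j++)      /* mutate a copy of it  */
>>                  pop[i][j] = pop[0][j] + rand() % 11 - 5;
>>              if (fitness(pop[i]) > fitness(pop[0]))
>>                  for (j = 0; j < GENES; j++)  /* promote improvement  */
>>                      pop[0][j] = pop[i][j];
>>          }
>>  }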
>>
>>
>>>>In my own work with AI problems (not chess related unfortunately), I've found
>>>>that some problems cannot be solved easily by using a single technique, but that
>>>>by combining several techniques, they can be resolved surprisingly
>>>>well.
>>>>
>>>>For example - maybe an NN could be trained to guide the search for significant
>>>>patterns in chess positions (and self-improve on the job once it starts the real
>>>>work).
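>>>>
>>>>(One minimal form such a net could take: a single logistic neuron that
>>>>scores candidate patterns and learns from which ones later panned out.
>>>>FEATS, Guide and the learning rate lr are all invented for the sketch.)
>>>>
>>>>  #include <math.h>
>>>>
>>>>  #define FEATS 8   /* crude numeric description of a candidate pattern */
>>>>
>>>>  typedef struct { double w[FEATS], bias; } Guide;
>>>>
>>>>  /* probability the pattern is worth exploring */
>>>>  double guide_score(const Guide *g, const double x[FEATS])
>>>>  {
>>>>      int i;
>>>>      double s = g->bias;
>>>>      for (i = 0; i < FEATS; i++)
>>>>          s += g->w[i] * x[i];
>>>>      return 1.0 / (1.0 + exp(-s));
>>>>  }
>>>>
>>>>  /* one on-the-job update: target 1 if the pattern proved useful */
>>>>  void guide_learn(Guide *g, const double x[FEATS], double target, double lr)
>>>>  {
>>>>      int i;
>>>>      double err = target - guide_score(g, x);   /* logistic-loss gradient */
>>>>      for (i = 0; i < FEATS; i++)
>>>>          g->w[i] += lr * err * x[i];
>>>>      g->bias += lr * err;
>>>>  }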
>>>>
>>>>It is an unfortunate aspect of chess that other techniques have worked
>>>>sufficiently well to have prevented interest in the real intelligence - the
>>>>evaluation of positions - from becoming the major focus.
>>>>
>>>>-g
>>>
>>>
>>>I wouldn't say that at all.  If you look at my evaluation code you will
>>>probably conclude that "knowledge" is considered important in many places...
>>>Both general-purpose knowledge _and_ special-case knowledge.  I can't speak
>>>for everybody, but _my_ program has gotten "smarter" over the years.  For
>>>a reason...
>>
>>I assume you remove some knowledge as increasing NPS renders it irrelevant.
>
>Not in the last 5 years.  I don't put knowledge _in_ if I believe that search
>will eventually make it obsolete.  The knowledge I put in is the kind of
>knowledge that I believe the search will _never_ be able to compensate for until
>the search can reach the end of the game.
>
>
>
>
>>
>>But also, as the standard of your opponents becomes higher, you need to cover
>>more of the positional cracks - so you need to add knowledge to stay in the
>>game.
>
>That is what I do every day. :)
>
>
>
>>
>>This is good - but it's still not enough. A question posed in "Chess Skill in
>>Man and Machine" was, "Why can't a computer be more like a human?". To rephrase
>>that question for Crafty: "Why can a machine that does both selective search
>>and a good evaluation of half a million positions per second still be beaten by
>>people who can evaluate only 2 or 3 positions per second?"
>
>
>I despise that question, but I will answer "Why can't a computer be more like
>a human?"  Answer:  Because we don't know how humans do what they do.  And until
>we do, emulation is _impossible_.
>
>
>
>>
>>Richard Lang told me at WMCCC 2K (as many other programmers have said) that his
>>biggest frustration was that Genius had reached a certain level, and he found it
>>very hard to get stronger, because if he improved the eval in one area, it would
>>tend to get worse in another.
>
>I don't see this myself...  It is just a slow process to figure out why the
>program loses games.  Used to be easier.  Nowadays it is hard for a GM to point
>out where it went wrong in many cases...
>
>
>
>>
>>At the same event, Frans Morsch (no less!) also told me that it was becoming
>>very difficult to raise computer chess above the current standard.
>>
>>If this is the general situation, then those who master knowledge acquisition
>>and management (the essence of intelligence) could well be richly rewarded.
>>
>>-g
>
>
>They have their opinion.  I have mine.  I have watched slow and steady progress
>for the last 30+ years.  I expect it to continue for the next 30 at the same
>rate...

Interesting... so you are saying that the technology explosion has really had no
accelerating effect on chess programming?  One would think that with
significantly faster processors, testing cycles at least would be accelerated...?




