Computer Chess Club Archives



Subject: Re: Symbolic: Search, planning, and a prospective

Author: Vasik Rajlich

Date: 06:12:44 03/15/04


On March 15, 2004 at 05:06:09, Steven Edwards wrote:

>Symbolic: Search, planning, and a prospective
>
>A traditional iterative A/B chess playing searcher has as its search space a
>tree with positions for the nodes and moves for the arcs.  Great importance is
>assigned to the metrics of (high) nodes per second and (low) mean branches per
>node, as otherwise a reasonable depth cannot be achieved.  And getting that
>depth is critical as the program is searching blindly, i.e., without a plan, and
>so relies entirely upon the search for discovery.
>
>A traditional program does work, and the deeper it can search, the better it
>plays.  But the main reason it works is that it can push its horizon further
>than a player (human or program) that searches to a lesser depth.
>
>So it is not surprising that some authors have an almost obsessive attitude
>towards having the fastest move generator, the lowest branching factor, the
>quickest evaluation function, and a never satisfied yearning for ever swifter
>and more numerous processors.  All of these can help improve the playing
>strength of a traditional iterative A/B program.
>
>But little, if any, of the above has anything to do with artificial intelligence
>or with making contributions to other areas of research.  Neither is there much
>here that can be seriously considered novel; the number of significantly new
>ideas in the field over the past decade is embarrassingly small.  The big
>mystery to me is why intelligent people continue to work on the same old model
>that is now thirty years old; indeed, a model older than some of today's
>authors!
>
>I suppose that one of the attractions of the traditional iterative A/B approach
>is that it is a low risk track.  With relatively inexpensive high speed CPUs and
>plenty of sample source on the Internet, a somewhat competent coder is almost
>guaranteed to produce a reasonable chess player in a matter of weeks.  The problem
>is the coder then spends the rest of his/her efforts on speeding up the program
>(that's the source of the short term gratification); the alternative of slowing
>down the program by tackling the harder task of adding chess knowledge is rarely
>considered.  Again, there's no surprise that nearly all of today's programs are
>more or less variations on the same theme.
>
>It's not too much of an exaggeration to suggest that if chess playing programs
>celebrated Father's Day, then Chess 4.x (Slate and Atkin, mid 1970s) would have
>a mailbox jam packed with greeting cards.
>
>--------
>
>Why is planning so important?  Paraphrases from chess literature:
>
>"Learning to plan is the beginning of chess mastery."
>
>"It is better to play with a poor plan than with no plan.  The worst is to play
>aimlessly."
>
>"A chessmaster does not consider thousands of variations with each move;
>instead, he formulates a plan and uses it to find the right move."
>
>"There is evidence that master level chess players have, from long experience,
>memorized [at the subconscious level] thousands of patterns and their associated
>actions.  These patterns are what distinguish a master from the average player."
>
>"It may be that the strongest chess players have up to fifty thousand patterns
>in memory that are available for evaluating a position and producing a plan."
>
>"The number of positions analyzed per move by strong players is usually between
>one hundred and two hundred, not the great multitude examined by computers."
>
>"Do not try to find the best move directly by calculation.  Instead, identify
>your advantages and disadvantages in the position to produce a set of goals.
>Find a plan that will meet these goals, or at least the most important goals,
>and then select the move that best fulfills your plan."
>
>--------
>
>My effort, Symbolic, has a search space.  Its search space is a plan space and
>not a position/move tree.  There is a position/move tree, but the plan space
>search wholly governs its growth.  The tree is not for discovery, but for plan
>verification.  Absolutely no nodes in the tree are expanded unless the plan
>search process directs such expansion, one node at a time, and only for good
>reason.  The node count and the node frequency of the position/move tree are
>merely epiphenomena of the actions of the plan search and so cannot be
>compared to similar metrics of a traditional A/B searcher.  In fact, given a
>goal of modeling human behavior, an excessive node count or node frequency is a
>sure sign of failure.
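>
>As a rough illustration of this control flow (all names below are hypothetical
>and are not Symbolic's actual code), the top level looks less like a recursive
>tree walk and more like a queue of competing plans, each asking for one node at
>a time:
>
>  ;; Hypothetical sketch only; the plan search, not the tree,
>  ;; decides which single node gets expanded next.
>  (defun plan-search (root plans budget)
>    (loop while (and (plusp budget) plans) do
>      (decf budget)
>      (let* ((plan (select-most-promising plans))     ; plan ordering
>             (node (next-node-to-verify plan root)))  ; at most one new node
>        (when node
>          (expand-node node))                         ; only for good reason
>        (update-plan-expectation plan node)))
>    (best-ply-one-move plans))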
>
>Some comparison points:
>
>1. Symbolic is not an iterative searcher.  Once a tree node is generated, it
>stays around for the duration of the search.  It may be revisited by different
>plans, but it is never regenerated.
>
>2. Symbolic is not an A/B searcher.  It does use minimax on plan expectations
>moderated by uncertainty functions, but A/B bounds are not necessarily
>associated with tree nodes.
>
>3. Symbolic is not dependent on move ordering.  It is rather dependent on plan
>ordering, but the intent here is to support multiple simultaneous plan verifications,
>with resources dealt to competing plans based on their estimated value in
>determining ply one move selection.
>
>4. Symbolic is heavily dependent on chess knowledge.  This is manifested by its
>pattern library and its associated matching process.  I would estimate that
>perhaps four fifths of its processing time budget will be spent in the pattern
>matcher.
>
>5. Patterns in Symbolic are used for three purposes: to formulate plans, to
>assist with plan verification, and to provide evaluation of quiescent positions.
>Only the plan verification process allows node expansion in the move/position
>tree.
>
>6. Symbolic is designed to make adding chess knowledge easy.  In contrast to a
>traditional searcher, there is no penalty for adding knowledge; adding the right
>knowledge will improve plan formation and verification and this will speed up
>the whole search process.  One large chunk of evidence for this comes from a
>related incremental improvement in Wilkins' planning program Paradise.  Compare
>this to a common experience with traditional A/B programs: removing chess
>knowledge can improve (i.e., speed up) the program if that knowledge has to be
>calculated at each of millions of tree nodes.
>
>7. In its early stages, Symbolic will have many, many gaps in its chess
>knowledge.  However, Symbolic also has a narration and summation facility and so
>these gaps will be easy to identify; once identified, the appropriate patterns
>can be added to its pattern library.  Again from the experience of Paradise,
>adding patterns is not too hard and does not significantly slow the matching
>process.  There is also the possibility of automating this process to some
>extent.  (Using Lisp helps a lot with this.)
>
>8. Symbolic's pattern library can be stocked with gateway patterns.  Gateway
>patterns are not unlike some genes that specify controller proteins that in turn
>selectively activate or deactivate regions of DNA.  Used this way, a matched
>gateway pattern can perform broad pre-selections of other patterns for
>eligibility for later pattern matching; a sketch appears after this list.
>(Examples: no sweepers -> skip pin
>patterns; no ortho movers -> skip back rank mate patterns; at least one knight
>-> activate knight fork patterns; has pawns -> activate various secondary pawn
>gateway patterns.)
>
>9. Hard Decision #1 is picking the right representation for patterns, which is
>strongly related to Hard Decision #2, picking the right representation for
>plans.  (Again, using Lisp helps a lot.)
>
>10. Hard Programming Task #1 is designing a fast pattern matcher with sufficient
>power to handle all the matching needs in reasonable time.  Paradise spent a lot
>of its time in its matcher and I expect the same of Symbolic.  There is the
>possibility of relegating some of the matching work to the ChessLisp interpreter
>(all C++ code) if Lisp code is too slow to handle all of the pattern matching by
>itself.  The preference is to use Lisp and have the C++ code option as a backup
>strategy.
>
>11. Hard Programming Task #2 is designing the plan verification process.  This
>includes a number of Hard Programming Subtasks: plan selection, plan execution,
>plan fault determination, plan repair, and plan scoring.
>
>12. Hard Programming Task #3 is designing a causality/similarity facility.
>Having this is a prerequisite for any claim to simulating human chess playing
>behavior; it is a major weapon against the ever present threat of combinatorial
>explosion.
>
>13. Hard Programming Task #4 is designing a suitable quiescence process.  Unlike
>the specialized quiescence search in Paradise, Symbolic should have the idea of
>quiescence much more integrated with the overall plan verification process.
>Again, a motivation here is to somehow model human behavior in determining and
>evaluating quiescence.
>
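>To make the gateway idea in item 8 a bit more concrete, here is the promised
>sketch (hypothetical names, not the actual pattern representation): a gateway
>precondition is a cheap test that decides whether the expensive matcher for a
>pattern family runs at all.
>
>  ;; Hypothetical sketch only; not the library's real format.
>  (defstruct pattern name precondition matcher)
>
>  (defun eligible-patterns (position library)
>    ;; Hand only those patterns whose cheap gateway precondition
>    ;; holds over to the (expensive) full matcher.
>    (remove-if-not (lambda (pat)
>                     (funcall (pattern-precondition pat) position))
>                   library))
>
>  ;; Example gateway: skip back rank mate patterns when there are no
>  ;; orthogonally moving pieces left on the board.
>  (make-pattern :name 'back-rank-mate
>                :precondition (lambda (pos) (has-ortho-movers-p pos))
>                :matcher (lambda (pos) (match-back-rank-mate pos)))
>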
>--------
>
>Will the effort succeed?  I think so, at least to the extent of meeting all or
>nearly all of the twenty primary goals I mentioned earlier.  I am confident that
>Symbolic can meet or surpass the test suite results of Paradise with similarly
>small searches and with a speed increase of a factor of one thousand (this
>includes a factor of about a hundred because of advances in hardware processing
>capability since Paradise was written).
>
>Several of the primary goals (e.g., portability, opening library, tablebases, a
>Lisp interpreter with high speed chess specific intrinsics) have already been
>met.  Yet there is so very much work needed with the Hard Parts that has only
>just started.  It may take a couple of iterations to get the "right" pattern and
>plan representations.  Because my ideas for plan data structures and associated
>functions are far different from those in Paradise, there is a lot of uncharted
>territory.  And the AI planning literature is not all that helpful as most of it
>does not deal with sequential two-person adversarial search.
>
>I am reminded of the adage:
>
>"Once an Artificial Intelligence program starts working, then what it does is no
>longer AI."
>
>My belief is that this accurately describes just about all of the chess playing
>programs of the past couple of decades.  And so comes the lure of building
>Symbolic: to do Something New.
>
>And just maybe, around the year 2034, computer chess researchers will speak of
>Symbolic as (1) being successful, (2) with encouraging new areas of research,
>and (3) having ideas now too old fashioned and too widely imitated.  Then they
>will rightly call for Something New.
>
>The road is long.


Hi Steven,

I enjoy the posts which you make here. I also aim in the long run to have a
program which is "knowledgeable", and would much prefer to deal with
higher-level concepts than with things like registers, cache hits, etc. The
question is: how to get there?


First of all, I think there are quite a few myths about the nature of chess -
how you need to "plan" and "think", etc. Is that really so? Chess evaluation is
pretty simple - you create some weaknesses, you put your pieces on some useful
squares, you attack against the king when you have more space or some weaknesses
to shoot against, you push your passed pawns, etc. Some time ago I
spoke about this with Greg Kaidanov, a professional chess coach who has trained
students of every level. He made some interesting claims.

One was that only top-level players should study the games of Kasparov. The
reason was that Kasparov doesn't plan, he doesn't play normal proper chess, he
doesn't look at patterns, he breaks all of the "rules" - and he can do so
because he counts everything (or prepares it at home).

Another interesting claim was about Watson's book "Secrets of Modern Chess
Strategy", where Watson claims that basically there are no such things as chess
rules, chess patterns - it's all just a big myth, used to explain things we
don't understand, and that concrete counting is everything. Kaidanov claims
that, like studying Kasparov, the book is totally inappropriate for all but the
best players. Weak players should generalize, play stereotyped chess, etc - but
only because they can't count.

So, given the choice between one chess entity which can plan, generalize, and
behave like a normal non-genius human, and another chess entity which can just
count everything - well, evidence suggests that you should bet on the latter.

Now, you could concede that yes, chess is a very unusual activity, where
counting is everything, but we don't develop chess programs to play good chess,
we develop chess programs to sharpen our tools for "real problems". And for real
problems, we have to plan, to generalize, to match patterns, etc.

Here, too, I would say: is that really so? Let's say that you are an individual
who "understands people". Do you do it by understanding some sort of
psychological principles? Maybe not. Maybe intuitively you're somehow
simulating other people's thinking on a near-atomic level. Yes, this may sound
insane, but this is in fact one very popular theory floating around the world
of academic philosophy these days.

Until these issues are somehow convincingly solved, it seems very difficult for
me to accept that a chess program can have some goal other than strength. I
can't agree that high-level Symbolic is in any way more human-like than
assembly-coded high-NPS Fritz. The only way in which it's really more human-like
is that its skill level more closely matches that of the normal human
population.


Note also that within the alpha-beta more-or-less-brute-force approach there
seems to be quite a bit of room for improvement at guiding the search and
improving the evaluation.

For example, some attacks in chess are just crushing. You look at the position,
and you know it's over. In principle, a good evaluation would give such a
position +10 or whatever, but no current program will do this. In fact,
none of the commercial programs which I own have very good evaluation functions.
There is a lot of room for improvement here, and if you look at
the commercial programs you will see that they do keep making progress. They
have to balance work on their evaluation function with its speed, and when they
consider high weights they have to think about the exceptions, etc, so progress
comes one drop at a time, but it's there.
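
To make this concrete, here is a completely made-up sketch (no real engine works
this way, and all of the names are invented). A big attack bonus is only safe if
you also handle the exceptions, which is why the weights arrive one drop at a
time:

  ;; Made-up illustration, not taken from any real engine.
  ;; A "crushing attack" bonus in centipawns (1000 = +10 pawns),
  ;; guarded by just one of the many exceptions such a term needs.
  (defun king-attack-bonus (pos side)
    (let ((attackers (attackers-near-king pos side))
          (defenders (defenders-near-king pos side)))
      (cond ((and (>= attackers 3)
                  (<= defenders 1)
                  (not (queens-off-p pos)))  ; exception: no queens, no attack
             1000)
            (t (* 15 attackers)))))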

I am also quite sure that the commercial programs haven't exhausted the topic of
selective search.

It seems to me that the existing framework is quite rich in possibilities, and
there are quite a few clever people who think the same.


Anyway, good luck with your program. Any additions to the computer chess
programmer's arsenal would of course be great ...

Cheers,
Vas


