# Computer Chess Club Archives

## Messages

### Subject: Re: likelihood instead of pawnunits? + chess knowledge

Author: Bob Durrett

Date: 12:22:44 10/26/02


```Before adding a "Post-breakfast" continuation, let me express my impressions.

On October 26, 2002 at 14:33:13, Ingo Lindam wrote:

>On October 26, 2002 at 10:28:07, Bob Durrett wrote:
>
>>This is to check to see if I understand your idea:
>>
>>One could go to a very large collection of high-quality master games [Megabase]
>>and do some research on patterns.  For each pattern, one could first identify
>>all games in which that pattern occurred.  Then, in each game, one could
>>estimate your three probabilities at the point in the game where that pattern
>>first occurred.
>
>I would rather say to estimate the probabilities over all the games in which
>the pattern "occurs" (you can define that in several ways, depending on whether
>you are interested just in static patterns, or in patterns occurring in a
>single position, or just in patterns occurring after move 15, or just in the
>midgame, ...)

OK.  If the pattern occurs in many games, then you have a large statistical
sample.  But my gut feel is that it would be a terrible blunder to just use game
wins/draws/losses to get your probabilities [and other measures] for each
pattern.  Analysis of how the pattern affected the game result would make a lot
more sense to me.  The outcome of the game might easily be caused by weak moves
or blunders later in the game.

>
>>Repeat for all patterns of interest, to produce a table of
>>data.  The first column of the table might be a name of the pattern and the
>>statistical information such as confidence levels.  Each row would be for a
>>different pattern.
>
>At a first stage, the pattern might have just a representation without a name.

Well, I'm not sure what the difference is between "a representation" and "a
name."

>
>>Am I on track so far?
>
>You are on the track. Of course it might be possible to have a very limited
>class/set of patterns at first that occur very often, and then enlarge the set
>of patterns by generating more special/complex patterns that occur less often
>but offer more significant probabilities.

Seems reasonable, but with reservations.  If the number of patterns becomes too
small, then the set of patterns might not "cover" very many of the positions
occurring in future games.

It is not clear to me how you would do the "enlarging."

>
>
>>One could extend this idea to identify degree of correlation between a new
>>pattern [which unexpectedly occurs in a game being examined or played] and one
>>of the patterns in your selected set of patterns.  There would have to be
>>criteria and a method for computing the correlation numbers.  [This could get
>>messy.]
>
>Yes, that could get messy... and I don't like the idea of evaluating patterns
>(too much) that I haven't seen in my database. The main idea of using patterns
>is to use the experience of millions of games to evaluate positions that I have
>never seen, but that are "similar" (at least in some patterns) to a sufficient
>number of games.

I feel that limiting yourself to exact matches would be unnecessarily
constraining.  The practical implication is that the number of patterns
might become excessively large.  [How much is excessive depends on the
technology used.]

>
>>The next logical step would be to compute the probabilities for the new
>>position.  This set of probabilities [a probability vector?] might be regarded
>>as being a function of the similar positions.  Generally, one would expect that
>>there would be several or many positions in your position database which would
>>be regarded as being similar enough to be considered.
>
>Yes, in most positions I have to evaluate there should be a lot of patterns
>that I can evaluate from the experience of the database.
>
>>Am I still on track?
>
>You are!
>
>>Incidentally, the programmers have the trivial [? : )] task of figuring out how
>>to make all this work.
>
>Certainly not trivial. And certainly some work to do for the computer, but
>at least I may be sure there are patterns saying something about the position
>(we use a lot of them in every game... perhaps not in every game)
>
>>Back to the idea:
>>
>>All of this must be done for each move.
>
>Not generating the patterns and estimating their probabilities, but estimating
>the probabilities of the positions I want to evaluate, yes!

Maybe some clarification would be helpful here.

>
>>Would you still have search algorithms?
>
>Yes, of course!

Why "of course"?  That is unclear to me.

>
>>If so, then all this maybe would have
>>to be done at each move in a string of moves being evaluated by the search.
>>This all appears very interesting for future computers, where there might be
>>millions or billions of microscopic microprocessors on a single chip.  One
>>could have each of these microprocessors dedicated to a single pattern in your
>>database of patterns.
>
>Well, of course I have to do a lot of evaluation for the positions I want to
>evaluate this way... but of course there is also a lot of work that is of use
>more than once in the search tree... and I don't think of using complex
>patterns for all positions of current search trees. It is possible to use them
>just within a certain scope in order to get a very valuable pruning of the
>tree. But of course, if you are right with your optimistic look into the future
>of computer development, I could do much more.
>
>>Well, my wife is hollering for me to come eat breakfast, so that's it for now.
>
>Ok, let's return to real life... for me it will be dinner soon. I will bake some
>pancakes for my wife before she comes home from work. I guess they will be much
>more appreciated than my weird ideas. ;-)
>
>Ingo

A couple of general remarks:

(1)  I prefer Blueberry Belgian Waffles with whipped cream on top.

(2)  Stepping back several steps and looking at what we're talking about might
lead to some general observations.  For example:
(a)  It is not easy to predict the course of technology in the future.
Right now, there seems to be progress in both complexity and speed.  Complexity
favors parallel processing.  Speed favors sequential or serial processing.
Progress in both complexity and speed might lend itself to hybrid computers
which take advantage of both technologies.  For example, if microscopic
processors were to become a reality [i.e. available at acceptable cost], then
parallel processing utilizing many of these might be practical.  A hybrid might
be to use mostly serial processing in the microscopic microprocessors but
parallel at the higher level.
(b)  The driving factor in current-day chess-playing programs on PCs is the
amount of time it takes the chess engine to arrive at its next move.  If the
amount of time allocated is very small, then severe compromises must be made,
reducing the performance level of the engine.  This same principle would apply
to parallel processors, it seems to me.  If you used a predominantly parallel
computer, there would still be some time required to produce the move.  The
promise of parallel processing, however, is reduction of this time by the
strategy of "divide and conquer."  That is what I see as the promise of your
strategy of using patterns.  On the other hand, if you look at each pattern
sequentially, one at a time, your program will be non-competitive to say the
least!
(c)  That is why I see parallel processing as being the only practical
application of your idea.  I especially favor having one tiny microprocessor for
each pattern.

Bob D.

```
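The central idea discussed in this thread — scan a large game database, find the games in which each pattern occurs, and estimate win/draw/loss probabilities per pattern along with a confidence measure — can be sketched in a few lines. This is only an illustration, not code from either poster; the pattern representation (a frozenset of piece-square features), the database format, and all names here are invented for the example, and it deliberately uses raw game results, the shortcut Bob warns about above.

```python
from collections import Counter, defaultdict
from math import sqrt

# A "pattern" here is just a hashable set of features, e.g. piece-square
# facts such as ("N", "f5") -- an invented representation for illustration.
Pattern = frozenset

def pattern_stats(games, patterns):
    """For each pattern, tally W/D/L over the games in which it occurs.

    `games` is a list of (features, result) pairs, where `features` is the
    set of all features seen in that game and `result` is one of
    "1-0", "1/2", "0-1".  (Hypothetical database format.)
    """
    counts = defaultdict(Counter)
    for features, result in games:
        for p in patterns:
            if p <= features:            # pattern occurred in this game
                counts[p][result] += 1
    return counts

def probabilities(counter):
    """Turn raw W/D/L counts into probabilities plus a rough error bar
    (standard error of the win estimate), so rarely-seen patterns are
    flagged as statistically weak -- the "confidence level" column of
    the table discussed above."""
    n = sum(counter.values())
    if n == 0:
        return None                      # pattern never seen: no estimate
    p_win = counter["1-0"] / n
    return {"n": n,
            "P(win)": p_win,
            "P(draw)": counter["1/2"] / n,
            "P(loss)": counter["0-1"] / n,
            "stderr": sqrt(p_win * (1 - p_win) / n)}

# Tiny fabricated "database": three games with their feature sets.
games = [
    ({("N", "f5"), ("K", "g1")}, "1-0"),
    ({("N", "f5"), ("K", "g8")}, "1-0"),
    ({("K", "g1")},              "0-1"),
]
knight_outpost = Pattern({("N", "f5")})
stats = pattern_stats(games, [knight_outpost])
print(probabilities(stats[knight_outpost]))
```

Each row of the resulting table is one pattern, as in the exchange above; the `stderr` field makes explicit why rare, complex patterns need many games before their probabilities are worth trusting.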