Computer Chess Club Archives




Subject: Re: What is

Author: Don Dailey

Date: 11:20:58 05/08/98


On May 08, 1998 at 10:51:56, Ulrich Tuerke wrote:

>Hi Don,
>thanks for your good and clear presentation. I too think that
>pre-processing is still done in most of the current chess programs, -
>more or less.
>I am considering the possibility of a moderated pre-processing, i. e.
>apply the time-consuming pre-processing evaluation parts not at the root
>but to nodes at depth - say - the half of the aimed search depth. Thus,
>one would still have the main advantage, saving a lot of calculation
>time, because the number of pre-processed nodes would still be a
>microscopic fraction of the leaf nodes. But on the other hand, the
>preprocessed knowledge would be applied to nodes being far closer to the
>leaf nodes (in your examples "only" 30 ply instead of 60).
>Opinions ?
Of course!

I've been considering this type of approach for a long time. I think
it definitely has merit. It would suffer from hash table anomalies,
but there are ways to deal with this, the primary one being to simply
ignore them.

There are a few issues to be dealt with but my feeling is that you
might do well with this scheme. Scoring might be a little fuzzy at
times but that might be ok. Probably the scheme would be to
pre-process at every level up to MAX_DEPTH-n, where n might be at
least 3 or 4 ply. Keep in mind that pre-processing is incredibly
expensive even compared to fully dynamic processing, so I suspect you
cannot get very close to the leaf nodes. Of course 3 or 4 ply away
might be considered pretty close if you were doing 14 ply searches.
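
A minimal sketch of how such a depth threshold might be wired in,
assuming a hypothetical MAX_DEPTH of 14 and illustrative names
throughout (this is my sketch of the scheme, not code from any actual
program):

```c
#include <assert.h>

/* Sketch of the scheme described above: run the expensive
 * pre-processing step only at nodes close to the root, i.e. at
 * every level up to MAX_DEPTH - n.  All names are illustrative. */

#define MAX_DEPTH 14

/* ply is the distance from the root; n is how many plies before
 * the horizon the pre-processing must stop (at least 3 or 4). */
static int should_preprocess(int ply, int max_depth, int n)
{
    return ply <= max_depth - n;
}

/* Inside a search, a node might then do something like:
 *
 *   if (should_preprocess(ply, MAX_DEPTH, 4))
 *       rebuild_piece_tables(pos);   // expensive, rare
 *   score = evaluate(pos);           // cheap table lookups
 */
```

With MAX_DEPTH = 14 and n = 4 the tables would be rebuilt down to ply
10, still only a tiny fraction of the nodes in the tree below.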

The way I got this idea (which I'm not taking credit for) actually
began when I was programming the ending Bishop and Knight vs King with
a pre-processor program of mine a few years ago. This is a very good
exercise for someone wanting to understand some of the difficulties of
pre-processing. I tested with a 3 ply search, and tuned the thing up
to do quite well. But it turned out not to work so well at other
levels! It turns out that no matter what table you use, it will tend
to be optimal for a specific depth only. It was almost funny because I
kept figuring out which positions in the search were messing it up and
changed the tables accordingly. These anticipations were wrong though
at different levels! Finally, I extracted the most general principles
possible and created a table that was almost static (not based as much
on the placement of the pieces) and it did ok, but not as good as the
depth-specific ones. Some preprocessors will suggest a move, and for
this ending that would work well. The idea is to give a bonus to the
whole search tree under some root move. For example I might give a
small bonus for playing any move that's a pin. This makes the program
slightly favor playing a pin. I call these "ply 1 incentives" but
there is probably better terminology in use. This is also another
example of preprocessing. See the note at the end of my post.
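
As a toy illustration of these "ply 1 incentives" (my own sketch, not
anyone's actual code), root move selection might simply add each
move's pre-computed bonus to its searched score:

```c
/* "Ply 1 incentive" sketch: each root move carries a small static
 * bonus computed once by the pre-processor (e.g. a few points for
 * any move that is a pin).  The bonus biases the choice of root
 * move without touching the search itself.  Names illustrative. */
typedef struct {
    int move;       /* encoded move, illustrative */
    int incentive;  /* pre-computed bonus for this root move */
} RootMove;

/* Returns the index of the root move whose searched score is best
 * after its incentive is added in. */
static int pick_root_move(const RootMove *moves, const int *scores, int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (scores[i] + moves[i].incentive >
            scores[best] + moves[best].incentive)
            best = i;
    return best;
}
```

With equal tactical scores, the move carrying the incentive wins; a
tactically better move still overrides a small bonus.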

But your idea would solve this and a host of other problems. You
could make very good guesses about configurations if you knew the
possible changes to a position were limited. This would enable you to
be more aggressive about the knowledge you included.

Here is another idea I had which is more dynamic in nature. It
occurred to me that you could do a fairly decent evaluation based on
pawn position only, the idea being a separate set of tables for each
pawn structure. These could be built by the pawn structure hash table
mechanism most programs have. Most pieces can be judged well by the
pawn structure around them.
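
A rough sketch of how such pawn-structure-keyed tables might hang off
the pawn hash (purely illustrative structures; the actual
table-building step is stubbed out):

```c
#include <stdint.h>
#include <string.h>

#define PAWN_HASH_SIZE 1024  /* illustrative size */

typedef struct {
    uint64_t key;            /* pawn-structure hash key */
    int      valid;
    int      psq[2][6][64];  /* [side][piece][square] bonus tables */
} PawnEntry;

static PawnEntry pawn_table[PAWN_HASH_SIZE];

/* The expensive "pre-processing" step: scan the pawn structure and
 * fill in the piece/square tables.  Stubbed here; a real version
 * would encode the knowledge discussed above. */
static void build_tables(PawnEntry *e, uint64_t key)
{
    memset(e->psq, 0, sizeof e->psq);
    e->key = key;
    e->valid = 1;
}

/* On a hit the tables are reused; on a miss (the ~1-in-20 case
 * with a 95 percent hit rate) they must be rebuilt. */
static PawnEntry *probe_pawn_tables(uint64_t key, int *hit)
{
    PawnEntry *e = &pawn_table[key % PAWN_HASH_SIZE];
    *hit = (e->valid && e->key == key);
    if (!*hit)
        build_tables(e, key);
    return e;
}
```

Every pawn move (or capture of a pawn) changes the key, which is
exactly the recomputation cost worried about below.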

I did not pursue this idea because I fear it may still be too slow.
If my hash table hit rate was say 95 percent, then 1/20 of the
positions would require a preprocessing step. While this does not
seem like much, you have to consider how expensive building the table
is. It's pretty expensive, especially if you want sophistication.
But on top of that you must recompute the positional score each time
the pawn structure changes, EVEN if the table itself is ready to go.
This should be fairly fast but not trivial in terms of execution
speed. You have already killed some of the speed you were hoping to
gain. Is the idea workable? Maybe, but I'm skeptical. Also, the
pawn structure is not the only relevant factor, so other evaluation
would no doubt be needed. For instance you cannot look at the pawn
structure and know for sure that you are in an endgame. You could
make assumptions though based on the exact position you had when you
built the tables.

Anyway the idea is interesting and I sometimes toy with the idea of
trying it out, but it seems like a lot of work for an experiment that
may not pay off. It could well be that someone dedicated to making it
work might find imaginative solutions to some of the problems.

But your idea may be more sound.  It may very well be that some are
using it.  I've never heard of a specific example of it though.

- Don

I think an old program called "Tech" used the method of "ply 1 move
incentives." There was no evaluation at all (the way I remember it)
except for material value, and a list of moves was prepared in order
of goodness by a pre-processor at the root. The program simply played
the first move it came to in the move list that was at least as good
tactically as any other. I'll bet Bob Hyatt knows about this one. I
had a nice discussion once with the programmer when he was giving a
talk on hash table collisions in chess programs.



Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.