Computer Chess Club Archives



Subject: Re: Artificial Intelligence in Computer Chess - *DETAILS* as promised

Author: Artem Pyatakov

Date: 15:55:53 03/28/04



Steven,

Thanks very much for replying and being curious! It's nice to hear from
someone with so much experience. (Note: closer to the end of this message, I
provide some actual details of my research.)

>Many undergraduates report the same problem.  Recent scientific surveys
>suggest a link between missing time/missing money and the typical
>undergraduate's preoccupation with the pursuit of women and alcohol.  I myself
>would selflessly volunteer for further research into this profound mystery,
>but I am unable to convince any funding agency to award a grant.

I am glad to see I am not alone in pursuing research in this particular area. I
think there is a special fund dedicated to this research inside the Ivy League
:-)

>In my opinion, you are essentially correct in your assessment of the
>"bag-of-tricks" approach for traditional A/B searchers.

That's a good way to summarize it (I might steal that term if you don't mind).
I believe that some tricks are unavoidable without much stronger AI instruments
- for instance, the evaluation function will probably have to stay partially
human-designed for a while in order to be strong. However, I think that given a
strong evaluation function, a program should be able to learn some things about
move ordering and extensions on its own. (This is probably easier if the
evaluation function returns a list of scores for particular features, like king
safety, instead of just a single number.)
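
To make that parenthetical concrete, here is a minimal sketch (in C, since the
snippets later in this message look like C) of what such a decomposed
evaluation might return; the struct fields and the Board type are purely
illustrative assumptions, not anyone's actual engine:

/* hypothetical per-feature evaluation result */
typedef struct Board Board;        /* the engine's board type (assumed) */

typedef struct {
    int material;                  /* illustrative feature scores, */
    int king_safety;               /* say in centipawns            */
    int mobility;
    int pawn_structure;
} EvalScores;

/* instead of: int evaluate(const Board *b); */
EvalScores evaluate(const Board *b);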

For others reading this thread, this is a good summary of many of the
tricks: http://www.cs.ualberta.ca/~jonathan/Grad/Papers/acm-final.ps

>
>>On the other hand, I think a lot of researchers have been overly ambitious and
>>have tried to replace Alpha-Beta & tricks with a neural network or some totally
>>different approach. I think that with the current state of AI tools, these
>>efforts are bound to fail.
>
>A lot of researchers?  Other than myself, I don't know of any other workers who
>are attempting a complete and competitive chess playing program that doesn't
>tread the oft-traveled A/B bag-of-tricks road.

That's a good point (I will probably include it in the paper) - not many are
working on a complete chess program, since most researchers just address a
particular component, like the evaluation function. But one old effort - MORPH
- comes to mind, and I think I noticed a couple of other failed neural-net
efforts (I don't have references handy, but I can look).

>Bound to fail?  I've been doing programming for thirty-five years and computer
>chess for almost thirty, and I don't feel I have enough experience to make
>such a claim.

Ah, thanks for catching me on that categorical statement; I should clarify.
What I meant to say is that if I had tried to do something like this (discard
all of A/B, etc.) within the course of one very limited year, I was bound to
fail completely :-) I think these efforts might work later, but the tools of
the AI world simply do not yet let us replace the whole program with an AI
personality and have it work. This might eventually become possible.

>My thought is that the better idea is to use chess knowledge extensively at the
>start of a search instead of applying the knowledge to a search that threatens
>to grow out of control.  An ounce of prevention is worth 453 grams of cure.

Could you give more details on what you mean here? I know there have been many
efforts at selective search - including a famous one by Botvinnik - none of
which has resulted in very competitive programs (despite the great chess
prowess of the authors). What are you doing differently? What do you think
these others have missed? What kinds of results have you gotten?

>... more details are needed.

OK, glad to oblige.

One thing to clarify - my time this year to work on the experiments was very
limited, so the results should be taken with a grain of salt.

As I mentioned before, I decided to concentrate my research on move ordering,
for a couple of reasons:
1) Move ordering promises big payoffs if done right.
2) Move ordering involves a lot of "tricks" (the bag-of-tricks), which I don't
think are really necessary in theory (unlike eval).
3) I wanted to apply AI instruments, which are notorious for not being 100%
accurate, and this fits the problem really well: if there is a move ordering
mistake once in a while, it will not lead to anything drastically bad. We just
need most moves ordered well.
4) This seemed to be an underdeveloped area. (I only found research by
Schaeffer, and then everyone else uses the same several heuristics - killer
move, history heuristic, following the previous PV in iterative deepening.)

Just to get this out of the way: one thing I tried first was using the
transposition hash table to store more information. I did not really
concentrate on this, and after experimenting with a couple of different table
replacement strategies - looking ahead one move and seeing whether there are
interesting entries in the hash table for positions one ply ahead - I abandoned
that part of the research. Does anyone know of any interesting advances in this
area? I am talking about anything that is not already in every engine (such as
Crafty) but that seems promising.

My main research interest was motivated by the question: if the history
heuristic works so well, then why hasn't anyone made any enhancements to this
very simple and limited method? Schaeffer says that, combined with the
transposition table, it leads to 99% of the gains of all the other heuristics,
such as the killer move and aspiration windows. (Interesting sidenote: I did
not notice searching the old PV first during iterative deepening in the list of
heuristics, and that one actually results in very big gains for me.)

What is the idea behind the history heuristic? The basic idea is that similar
moves will work in different parts of the tree. However, the history
heuristic's big limitation (in my opinion) is that it absolutely does NOT take
context (the board) into account. Ideally, we would want the same BEST move to
be tried on similar boards, but we probably would not want to try the same move
on totally different boards. (In addition, the amount by which to increment the
table entry, and when to "forget" things, seemed very arbitrarily chosen to
me.)
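
For readers who haven't implemented it: the standard bookkeeping is just a
64x64 counter table, credited on cutoffs and consulted when ordering moves. A
minimal sketch (the function names are mine, but the scheme is the usual one):

/* the classic history heuristic: one counter per from/to square pair */
static unsigned history[64][64];

/* called when a move causes a cutoff at the given remaining depth */
void history_credit(int from, int to, int depth)
{
    history[from][to] += 1u << depth;   /* deeper cutoffs count more */
}

/* move-ordering key: a higher counter means the move is tried earlier */
unsigned history_score(int from, int to)
{
    return history[from][to];
}

Note how the counters are shared across the whole tree with no reference to
the position at hand - exactly the context-independence I am complaining about.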

This motivated me to think about a notion of board similarity. One fact is
worthy of note: transposition tables take the notion of similarity to its
strictest extreme, meaning they only allow information to be transferred
between different parts of the tree when the positions are identical. I wanted
something fuzzier. (Another reason I wanted this is that I believe humans are
pretty good at this kind of fuzzy matching - they will not re-search the same
line just because some pawn on the side has shifted without making much of a
difference.)

I took a very standard AI approach - I made a feature vector that describes
the board. The idea is that "similar" vectors will represent similar positions
(positions with similar essential features). I knew this was a very tall order,
but I believed that anything is better than the history heuristic's total
context-independence (we are not just making moves in outer space here!). So I
just picked a feature vector that made some sense, after trying a couple of
others. (Any better suggestions are very welcome.)

The vector was:
[First 64 elements - 1 if a piece is present on that square, 0 if not]
[Next 16x16 elements - 1 if piece A is attacking piece B, 0 if not (an attack
table)]

There are a ton of different possible feature vectors, and I have no idea
which is best. I simply picked this one for its utter simplicity. Obviously, it
would be nice to add elements like king safety to it, but that was not the
focus of the research, which is centered on the approach itself.
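
Here is a hedged sketch of how such a 64 + 16x16 = 320-element vector might be
filled in. The Board type, the piece_on() and piece_attacks() helpers, and the
0..15 piece codes are illustrative assumptions about an engine, not details
taken from the actual experiment:

#define N_FEATURES (64 + 16 * 16)

typedef struct Board Board;                    /* engine board type (assumed) */
extern int piece_on(const Board *b, int sq);   /* piece code, or -1 (assumed) */
extern int piece_attacks(const Board *b, int attacker,
                         int victim);          /* 0 or 1 (assumed) */

void make_feature_vector(const Board *b, int v[N_FEATURES])
{
    int sq, a, d;

    /* first 64 elements: square occupancy */
    for (sq = 0; sq < 64; sq++)
        v[sq] = (piece_on(b, sq) >= 0);

    /* next 16x16 elements: does any piece with code a currently
       attack a piece with code d? */
    for (a = 0; a < 16; a++)
        for (d = 0; d < 16; d++)
            v[64 + 16 * a + d] = piece_attacks(b, a, d);
}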

Having feature vectors opens up all kinds of AI learning tools that just love
vectors. I settled on using one of the simplest (and one that my advisor
loves) - perceptrons.
http://www.cs.princeton.edu/~schapire/uncompress-papers.cgi/FreundSc98.ps

I won't get into details here, but the idea is simply to maintain a weight
vector for each possible move and to adjust these weights by the feature vector
every time a move is picked incorrectly. The move whose weights have the
biggest dot product with the feature vector is the winner in a position. This
is an online learning algorithm. It produced decent, albeit slightly
disappointing, results: on some of the Bratko-Kopec positions it reduced the
node counts, but by 5% LESS than the history heuristic did. There are many
possible reasons, such as the stupidity of the vector and the stupidity of
perceptrons, but I am very much convinced that this context-oriented knowledge
transfer is a good idea. What do you think?
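
In code, the scheme amounts to a multiclass perceptron. A minimal sketch,
assuming the feature vector above and a from*64+to move index (both the
indexing and the function names are illustrative, not necessarily what my
actual code did):

#define N_FEATURES (64 + 16 * 16)
#define N_MOVES    (64 * 64)           /* index a move as from * 64 + to */

static int weights[N_MOVES][N_FEATURES];

/* ordering key: the move whose weight vector has the biggest dot
   product with the position's feature vector gets tried first */
int perceptron_score(int move, const int x[N_FEATURES])
{
    int i, dot = 0;
    for (i = 0; i < N_FEATURES; i++)
        dot += weights[move][i] * x[i];
    return dot;
}

/* online update: when the move we ranked first was not the move the
   search proved best, shift weight from the wrong move to the right one */
void perceptron_update(int predicted, int best, const int x[N_FEATURES])
{
    int i;
    if (predicted == best)
        return;
    for (i = 0; i < N_FEATURES; i++) {
        weights[best][i]      += x[i];
        weights[predicted][i] -= x[i];
    }
}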

An approach that has actually worked is a similar but less ambitious
technique: instead of indexing only by the TO and FROM squares like the history
heuristic does, I also index by WHICH_PIECE_IS_MOVING and by a bit vector of
the types of pieces it is attacking. In other words, instead of just saying

history[from][to] += 1 << depth;

I say something like

history[piece][from][to][attack_bit_vector] += (1 << depth);

This actually works about 3% better than the history heuristic in terms of
node counts on the same Bratko-Kopec problems, which I think supports my
conclusion that additional context would definitely help the history heuristic.
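
To make attack_bit_vector concrete, here is one way it could be computed - one
bit per attacked piece type (pawn through king, 6 bits, so the last table
dimension has 64 slots). Whether the attack set is taken before or after the
move is a detail glossed over here; this sketch uses the piece standing on its
from square, and attacks_piece_type() is an assumed helper:

enum { PAWN, KNIGHT, BISHOP, ROOK, QUEEN, KING };   /* piece types 0..5 */

typedef struct Board Board;                         /* engine board (assumed) */
extern int attacks_piece_type(const Board *b, int from, int type);

int attack_bit_vector(const Board *b, int from)
{
    int type, bits = 0;
    for (type = PAWN; type <= KING; type++)
        if (attacks_piece_type(b, from, type))
            bits |= 1 << type;                      /* one bit per type */
    return bits;                                    /* 0..63 */
}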

(Disclaimer: all results are preliminary)

Thanks again! More comments are definitely appreciated.

Artem


