Computer Chess Club Archives


Subject: Re: chess and neural networks

Author: Vincent Diepeveen

Date: 18:22:40 07/06/03

On July 06, 2003 at 17:52:59, Sune Fischer wrote:

>On July 06, 2003 at 17:38:01, Vincent Diepeveen wrote:
>
>>If there were money to be earned by programming a backgammon engine, i am
>>sure some guys who are good at forward-pruning algorithms, like Johan de
>>Koning, would win every event there. It's like making a tic-tac-toe program
>>and then claiming that an ANN is going to work.
>>
>>As we have a saying here: "In the land of the blind, the one-eyed man is king".
>>
>>That's why i focus upon chess.
>>
>>Unlike you, i know how to do it with ANNs (just like many others do), i just
>>don't have 10^1000 of system time to actually let the learning algorithm
>>finish ;)
>>
>>Any approximation in the meantime will be playing very lousy chess...
>>
>>Hell, with 10^1000 runs, even TD learning might be correctly finding the right
>>parameter optimization :)
>>
>>TD learning is randomly flipping a few parameters each time. It's pretty
>>close to GAs in that respect.
>
>There might be tricks to speed that up; you certainly can't rule it out before
>it has been seriously attempted.

How do you plan to speed it up without the learning algorithm having domain
knowledge of chess?

It just has n parameters. Let's say 15000 parameters, to not make it too hard
for it. It flips a few parameters and scores a bit better; then it flips a few
other parameters and scores worse. And so on.
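
To make concrete what i mean, here is a sketch of that blind tuner in Python
(everything here is hypothetical; play_match() is a toy stand-in for however
you measure playing strength, not a real API):

  import random

  N_PARAMS = 15000
  _target = [random.uniform(-1, 1) for _ in range(N_PARAMS)]

  def play_match(params):
      # toy fitness standing in for match results: closer to the hidden
      # target scores higher (in real life: many games vs a fixed opponent)
      return -sum((p - t) ** 2 for p, t in zip(params, _target))

  def blind_tune(params, rounds=10000, flips=5, step=0.001):
      # flip a few parameters at random; keep the change only when the
      # score improves -- no chess knowledge anywhere in the loop
      best = play_match(params)
      for _ in range(rounds):
          cand = list(params)
          for i in random.sample(range(N_PARAMS), flips):
              cand[i] += random.choice((-step, step))
          score = play_match(cand)
          if score > best:
              params, best = cand, score
      return params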

So even if it brute-forced each parameter on its own, it could at best do a
logarithmic optimisation per parameter, as with a binary search.

However, it isn't searching each parameter separately; it modifies a bunch of
parameters at a time.

So you are still basically relying on randomness, with the 'luck' factor being
whether it finds the optimal value for each parameter within log_2(n) tries.

That makes it on average (log_2 n)^15000 tries.
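
Putting a number on that, with an assumed granularity of 64 values per
parameter (my assumption, purely for illustration):

  import math

  n = 64                      # assumed values per parameter
  per_param = math.log2(n)    # 6 'lucky' tries per parameter
  exponent = 15000 * math.log10(per_param)
  print("(log_2 %d)^15000 ~= 10^%d" % (n, round(exponent)))
  # prints: (log_2 64)^15000 ~= 10^11672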

Now, if you are lucky, there are many parameter settings which work well
enough.

For example it might not matter too much whether a pawn on a2 is given 0.022 of
a pawn or 0.021 of a pawn.

So there is a reduction there. However, the fine-tuning is a major problem: not
a single optimization algorithm is designed with very precise fine-tuning in
mind.

So it will find a very rough tuning pretty soon, but that will play worse than
a hand tuning.

A tuning better than the one you can do by hand, however, will take on the
order of x^15000 tries, where x is for sure > 2.
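
Even the standard generic trick, shrinking the step size over time in
simulated-annealing style, only helps the rough phase; a sketch of such a
schedule (my own numbers, nothing more):

  def step_size(round_no, start=0.1, floor=0.001, decay=0.9995):
      # coarse steps early for the rough tuning, shrinking toward the
      # 0.001-pawn granularity the fine-tuning would need
      return max(floor, start * decay ** round_no)

  # after roughly 9200 rounds the step bottoms out at the floor, long
  # before 15000 parameters have each seen a useful number of tries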

Best regards,
Vincent

>For example, one could try some kind of "anti-aliasing" on the board, i.e.
>smear the whole thing, to make the patterns more general. This would require
>only a fraction of the full network and make it faster to train.
>
>I'm sure the ideas will come if people try it; what you see as a problem
>others may see as a challenge :)
>
>-S.
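
For what it is worth, the "smearing" described above could be as simple as
averaging each square with its neighbours before feeding the net; a sketch of
my reading of the idea, nothing that was actually specified:

  def smear(board):
      # board: 8x8 list of floats, e.g. 1.0 on occupied squares
      # each square becomes the average of its 3x3 neighbourhood, a
      # crude 'anti-alias' that makes nearby patterns overlap
      out = [[0.0] * 8 for _ in range(8)]
      for r in range(8):
          for c in range(8):
              cells = [board[rr][cc]
                       for rr in range(max(0, r - 1), min(8, r + 2))
                       for cc in range(max(0, c - 1), min(8, c + 2))]
              out[r][c] = sum(cells) / len(cells)
      return out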


