Computer Chess Club Archives



Subject: Re: chess and neural networks

Author: Sune Fischer

Date: 02:42:32 07/07/03



On July 06, 2003 at 21:22:40, Vincent Diepeveen wrote:

>On July 06, 2003 at 17:52:59, Sune Fischer wrote:
>
>>On July 06, 2003 at 17:38:01, Vincent Diepeveen wrote:
>>
>>>If there was money to earn by programming a backgammon engine, i am sure some
>>>guys who are good in forward pruning algorithms like Johan de Koning would win
>>>every event there. It's like making a tictactoe program and then claiming that
>>>an ANN is going to work.
>>>
>>>As we have a saying here: "In the land of the blind, one eyed is King".
>>>
>>>That's why i focus upon chess.
>>>
>>>In contradiction to you, i know how to do it with ANNs (just like many others
>>>do), i just don't have 10^1000 system time to actually let the learning
>>>algorithm finish ;)
>>>
>>>Any approximation in the meantime will be playing very lousy chess...
>>>
>>>Hell, with 10^1000 runs, even TD learning might be correctly finding the right
>>>parameter optimization :)
>>>
>>>TD learning is randomly flipping a few parameters each time. It's pretty close
>>>to GA's in that respect.
>>
>>There might be tricks to speed that up, certainly can't rule it out before it
>>has been seriously attempted.
>
>how do you plan to speed it up without the learning having domain knowledge on
>chess?
>
>it just has n parameters. Let's say 15000 parameters to not make it too hard for
>it. It flips a few parameters. then it scores a bit better. then it flips a few
>other parameters. it scores worse. And so on.
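The flip-a-few-parameters loop described in the quote can be sketched as a plain hill climb. Everything here is an illustrative stand-in: a real tuner would score a parameter set by playing games, while this toy just measures distance to a made-up optimum.

```python
import random

random.seed(1)

# Hypothetical setup: n parameters and a stand-in score function.
# A real engine tuner would score by playing matches, not like this.
n = 20
optimum = [random.uniform(-1, 1) for _ in range(n)]
params = [0.0] * n

def score(p):
    # Negative squared distance to the (normally unknown) optimum.
    return -sum((a - b) ** 2 for a, b in zip(p, optimum))

best = score(params)
for step in range(5000):
    trial = params[:]
    for _ in range(3):                       # "flip a few parameters"
        i = random.randrange(n)
        trial[i] += random.uniform(-0.1, 0.1)
    s = score(trial)
    if s > best:                             # keep only improvements
        params, best = trial, s
```

The weakness Vincent points at is visible in the structure: the loop has no gradient information and no domain knowledge, so progress comes only from lucky perturbations, and fine tuning near the optimum gets slower and slower.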

One would first have to consider the theoretical aspect: how big would the net have to be, how accurately do we want to be able to evaluate, and so on.

Then the more practical questions: how do we get a cost function cheaply? What kind of training algorithm should we apply (probably the fastest, most aggressive kind)? How do we speed up the network evaluation at runtime?
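On the runtime-speed question, the usual answer is to keep the net small and the forward pass trivial. A minimal sketch of what that might look like; the layer sizes, random weights, and the 64-element feature vector are invented for illustration, not a real evaluation design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 64 input features -> 16 hidden units -> 1 score.
W1 = rng.standard_normal((16, 64)) * 0.1
b1 = np.zeros(16)
W2 = rng.standard_normal((1, 16)) * 0.1
b2 = np.zeros(1)

def evaluate(features):
    """One cheap forward pass: a single tanh hidden layer.

    Small fixed sizes keep this to two matrix-vector products,
    which is the kind of cost a search can afford per node.
    """
    h = np.tanh(W1 @ features + b1)
    return float(W2 @ h + b2)

score = evaluate(rng.standard_normal(64))
```

With trained weights the same two products would produce a position score; the point of the sketch is only that evaluation cost scales with the layer sizes, which is why "how big would the net have to be" and "how do we speed it up" are really the same question.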

Chess is not the most natural thing to apply it to, IMO, because chess is so concrete: tiny differences can have large consequences.

>So it can at most do a logarithmic optimalisation for each parameter when it
>would be brute forcing.
>
>However it isn't brute forcing over each parameter but it tries to modify a
>bunch of parameters at a time.
>
>So you still are basically busy with a randomness with the 'luck' factor that it
>will find the optimal parameterization for each parameter in log(2)(n).
>
>that makes it on average (log n / log 2) ^ 15000
>
>Now if you are lucky there are many optimizations which work well.
>
>For example it might not matter too much whether a pawn on a2 is given 0.022 of
>a pawn or 0.021 of a pawn.
>
>So there is a reduction there. However the finetuning is a major problem. not a
>single optimization algorithm is created having in mind a very good fine tuning.

ANNs are not meant to be good at fine tuning.
They are designed to be good at "recognizing", even when the picture is noisy.
This is actually one of their strengths, though it sometimes seems like a
weakness.
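The "recognizing despite noise" point can be shown with the simplest possible unit: a single linear neuron whose weights encode two prototype patterns. Flipping a couple of input cells degrades the dot product gracefully instead of breaking the answer. The patterns and weights here are made up for the demonstration.

```python
import numpy as np

# Two hand-made 9-cell prototype patterns.
proto_a = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0], dtype=float)
proto_b = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1], dtype=float)

# A single linear unit separating the two prototypes.
w = proto_a - proto_b

def classify(x):
    # Sign of the dot product decides the class.
    return 'A' if w @ x > 0 else 'B'

# Corrupt pattern A by flipping two cells; the dot product
# drops but stays positive, so the answer survives the noise.
noisy_a = proto_a.copy()
noisy_a[1] = 0
noisy_a[4] = 1
```

That graceful degradation is exactly what makes nets poor at the knife-edge distinctions discussed next: a one-bit change that flips the true result barely moves the dot product.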

So this is where I agree with you: chess is too concrete in many respects. Take
passed pawn races: one tempo off and you lose. It is doubtful whether an ANN
could ever learn these things.
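The pawn-race point can be made with bare arithmetic: count each side's moves to promotion, and the side to move wins the race exactly when it needs no more moves than the opponent. A toy sketch that deliberately ignores kings, captures, double pawn moves, and stalemates; it only illustrates the one-tempo knife edge:

```python
def race_winner(white_rank, black_rank, white_to_move):
    """Toy pawn race. White's pawn promotes on rank 8,
    Black's on rank 1; only move counts are considered."""
    white_moves = 8 - white_rank   # moves White needs to promote
    black_moves = black_rank - 1   # moves Black needs to promote
    if white_to_move:
        return 'white' if white_moves <= black_moves else 'black'
    return 'black' if black_moves <= white_moves else 'white'

# Identical race, both sides two moves from queening:
# the single tempo decides the whole game.
race_winner(6, 3, True)    # 'white' moves first and queens first
race_winner(6, 3, False)   # same position, 'black' to move wins
```

The position is numerically almost identical in both calls; only the side to move differs, yet the result flips completely. That discontinuity is what a smooth pattern-recognizer has trouble representing.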

-S.

>So a very rude tuning it will find pretty soon, but that will work worse than a
>hand tuning.
>
>A better tuning than you can do by hand however will take that x ^ 15000 where x
>is for sure > 2.







Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.