Computer Chess Club Archives


Subject: Re: chess and neural networks

Author: Vincent Diepeveen

Date: 06:16:14 07/07/03

On July 07, 2003 at 07:20:18, Sune Fischer wrote:

>On July 07, 2003 at 06:03:08, Vincent Diepeveen wrote:
>
>>>One would first have to consider the "theoretical" aspect. How big would the net
>>>have to be, how accurately do we want to be able to evaluate, and so on.
>>
>>Very good question. I can give a very good answer. For the hard positions the
>>accuracy has to be within 0.1 pawns for the summation of all the involved
>>patterns. We know from experience that many good moves get missed: for a Bg5
>>move recently posted here, diep's static evaluation after a shallow search had
>>white up 0.035 for the wrong move and the winning move at black up 0.001.
>>
>>So we see differences of no more than 0.034 for moves that are very difficult
>>to find.
>
>If the differences are going to be that small, then I think it either doesn't
>matter much because the differences really are that small, or it is best left to
>the search to find a deeper cause.
>
>IMO the main point of using an ANN instead of handcoding everything is not to
>gain accuracy, but to take the load off the programmer. I.e. you code the basic
>setup, then let it run on a supercomputer for a few weeks and you have a world
>champion.

Remember that position where diep's evaluation is 0.035 off?

An NN-generated evaluation will, until the year 2020 for sure, be more than
0.350 off, assuming of course the position is not in the training set.
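
To put rough numbers on that, here is a back-of-the-envelope sketch (not
anything diep actually runs; only the ~0.036 margin and the 0.035 / 0.350
error sizes come from this discussion, the rest is made up):

import random

# Hypothetical simulation: the true score gap between the winning move and the
# wrong move is ~0.036 pawns (the Bg5 example above).  How often does an
# evaluator with a given random error rank the wrong move higher?
TRUE_MARGIN = 0.036
TRIALS = 100_000

for error in (0.035, 0.35):
    flipped = 0
    for _ in range(TRIALS):
        best = TRUE_MARGIN + random.uniform(-error, error)
        second = 0.0 + random.uniform(-error, error)
        if second > best:          # the evaluator prefers the wrong move
            flipped += 1
    print(f"eval error +/-{error}: wrong move preferred "
          f"{100 * flipped / TRIALS:.1f}% of the time")

With an error of +/-0.35 pawns the ranking of two such moves is close to a
coin flip; with +/-0.035 it is still right most of the time.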

>In some cases it might be possible for the ANN to do better than handcoding,
>because the programmer didn't think of that pattern, hasn't had time to write
>it, or really has no idea how much to score it.
>
>>Note that the singular extension version of diep finds it at 10 ply already (42
>>seconds). I'm talking about the engine version without SE.
>>
>>2rq3r/5pk1/pn1b1np1/1p1pN3/2pP3p/1P3QN1/P1P1RPPP/2B1R1K1 w - - 0 1 Bg5!! very
>>hard. Varnusz-Pogats, Budapest 1979
>>
>>>Then the more practical stuff: how do we get a cost function, cheaply?
>>>What kind of training algorithm should we apply (probably the fastest, most
>>>aggressive kind), how do we speed up the network evaluation during runtime, etc.?
>>>
>>>Chess is not the most natural thing to use it on IMO, because chess is so
>>>concrete, tiny differences can make a big difference.
>>
>>This is not the biggest problem IMHO.
>>
>>If you tune a set X of positions to scores Y within 0.001 pawns, then ANNs can
>>do it for you, as long as you let them recognize whether a position is in X.
>>
>>The problem in chess is that the net has to evaluate positions X' which are not
>>in the set X. This is IMHO the *fundamental* reason why ANNs won't work for chess.
>
>That's not really a problem, that's how they work.
>You train the network on "the training set", then you test it on a "test set".
>Of course you know that it will do well on the training set given enough
>training and network size; that's a mathematical fact. The really interesting
>part is to see how it performs on the test set, to see if it has learned to
>generalize.

So far ANNs have only excelled when the training set is the test set, like that
voice recognition.

Note that even then they do a primitive job compared to human ears, but of
course they can do it automatically, which is a huge advantage.
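
To make the train/test point concrete, here is a minimal, entirely hypothetical
sketch (nothing to do with diep or any real evaluation function): a tiny
one-hidden-layer net is fitted to a made-up "training set" of position features
and scores, then measured on positions it has never seen, so the generalization
gap can be read off directly.

import numpy as np

# Hypothetical sketch of the train/test methodology.  The features and the
# target function are invented purely for illustration.
rng = np.random.default_rng(0)

def target(x):
    # stand-in for the "true" evaluation of a position with feature vector x
    return np.sin(3 * x[:, 0]) * x[:, 1]

X_train = rng.uniform(-1, 1, size=(50, 2))   # positions the net gets to see
X_test  = rng.uniform(-1, 1, size=(50, 2))   # positions it has never seen
y_train, y_test = target(X_train), target(X_test)

# one hidden layer with tanh activation, trained by plain gradient descent
W1 = rng.normal(0, 0.5, size=(2, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, size=(32, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, (h @ W2 + b2).ravel()

lr = 0.05
for _ in range(5000):
    h, pred = forward(X_train)
    err = pred - y_train
    # backpropagation for mean squared error
    dW2 = h.T @ err[:, None] / len(err)
    db2 = err.mean(keepdims=True)
    dh = err[:, None] @ W2.T * (1 - h ** 2)
    dW1 = X_train.T @ dh / len(err)
    db1 = dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

for name, X, y in (("training", X_train, y_train), ("test", X_test, y_test)):
    _, pred = forward(X)
    print(f"mean absolute error on {name} set: {np.mean(np.abs(pred - y)):.3f}")

Whether the gap between the two numbers is small enough for margins of 0.036
pawns is exactly the question at issue here.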

>>In addition to that, I seriously doubt whether they work anyway for several AI
>>applications where they are claimed to be used successfully, keeping in mind
>>that the average AI application is *not* recognizing a fixed radar shape and
>>*not* recognizing a fixed voice.
>
>No reason to doubt that, trust me :)
>
>-S.
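
For reference, the Varnusz-Pogats position from the quoted post can be checked
mechanically. A minimal sketch, assuming the python-chess library (not
something anyone in this thread uses):

import chess  # assumption: python-chess, not part of the original discussion

FEN = "2rq3r/5pk1/pn1b1np1/1p1pN3/2pP3p/1P3QN1/P1P1RPPP/2B1R1K1 w - - 0 1"
board = chess.Board(FEN)

bg5 = board.parse_san("Bg5")     # raises ValueError if Bg5 were not legal here
print("Bg5 is legal for White:", bg5 in board.legal_moves)

board.push(bg5)                  # play the bishop sacrifice
print("Black can simply accept it:",
      "hxg5" in {board.san(m) for m in board.legal_moves})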


