Computer Chess Club Archives



Subject: Re: chess and neural networks

Author: Marc van Hal

Date: 10:28:47 07/02/03



On July 02, 2003 at 11:57:36, Ralph Stoesser wrote:

>On July 02, 2003 at 10:46:03, Marc van Hal wrote:
>
>>On July 01, 2003 at 17:27:06, Ralph Stoesser wrote:
>>
>>>On July 01, 2003 at 17:08:29, Marc van Hal wrote:
>>>
>>>>On July 01, 2003 at 16:17:37, Magoo wrote:
>>>>
>>>>>On July 01, 2003 at 16:02:14, Albert Bertilsson wrote:
>>>>>
>>>>>>On July 01, 2003 at 15:55:07, Anthony Cozzie wrote:
>>>>>>
>>>>>>>On July 01, 2003 at 15:42:42, Albert Bertilsson wrote:
>>>>>>>
>>>>>>>>>Yes, but things are different with chess. In backgammon, you don't need to do
>>>>>>>>>deep searches. Backgammon is a randomized game, chess is not. There have been
>>>>>>>>>attempts, but not very successful ones. I have looked at KnightCap, which uses
>>>>>>>>>standard minimax with an ANN to evaluate the quiet positions. It has a rating of
>>>>>>>>>about 2200 at FICS... pretty good, but nowhere near the top. I guess a program
>>>>>>>>>with minimax counting only material would have a rating near that. Like they
>>>>>>>>>say, chess is 99% tactics. Nothing beats deeper searching.
>>>>>>>>
>>>>>>>>2200 on FICS with MiniMax counting material only?
>>>>>>>>
>>>>>>>>That is crazy!
>>>>>>>>
>>>>>>>>One of us is wrong, and I hope it isn't me, because I've spent many hours on my
>>>>>>>>engine and it is still nowhere near 2200 in anything other than Lightning! If
>>>>>>>>you're right, I'm probably the worst chess programmer ever, or have misunderstood
>>>>>>>>your message completely.
>>>>>>>>
>>>>>>>>/Regards Albert
>>>>>>>
>>>>>>>
>>>>>>>Your engine, being new, still has a lot of bugs.  I'm not trying to insult you;
>>>>>>>it took me a full year to get my transposition table right.  At least, I think
>>>>>>>it's right. Maybe.  Anyway, the point is that it takes quite a while to get a
>>>>>>>good framework. I suspect on ICC a program with PST evaluation only could get
>>>>>>>2200 blitz. (With material evaluation only it would play the opening horribly,
>>>>>>>e.g. Nc3-b1-c3-b1-c3, "oh darn, I lose my queen" sort of stuff.)
>>>>>>>
>>>>>>>Anthony
>>>>>>
>>>>>>I agree that PST evaluation with alpha-beta and a transposition table can play
>>>>>>at least decent chess, but those are quite a few powerful improvements over MiniMax
>>>>>>with material only.
>>>>>>
>>>>>>/Regards Albert
>>>>>
>>>>>I said near, and when I say minimax, I really mean alpha-beta (no one uses a
>>>>>straightforward minimax). When my engine was "born" (minimardi) it had only
>>>>>material evaluation; searching 4 ply, it could play a decent game, rated around
>>>>>1700 blitz at FICS. Now, consider searching around 8 ply: I think a rating >2000
>>>>>is not hard to imagine. My point was that in chess, the most important thing for
>>>>>accurately evaluating positions is a deep search. No matter what methods you use,
>>>>>if you search deep your program will play decently. This is one of the reasons why
>>>>>ANNs have worked so well in backgammon and not in chess.
>>>>
>>>>Can't neural networks look deep ?
>>>>Why is that?
>>>>And do neural networks learn or not?
>>>>
>>>>Marc
>>>
>>>No to the first question in any case, and no to the second question in respect of
>>>Snowie backgammon.
>>>NN backgammon programs like Snowie look at most 3 ply ahead and evaluate
>>>the 'MiniMaxed' positions with a pre-trained NN. They do not learn anymore while
>>>playing, but it would also be possible to do so.
>>
>>What is a neural network if it does not learn by itself?
>>(A buggy program?)
>>And again: why can't it look deep?
>>
>>I think real A.I. cannot have these problems.
>>
>>Marc
>
>
>The NN does learn by itself; that's the idea of a NN! But in the case of Snowie (and
>Jellyfish and GnuBackgammon) the NN has already learned. It has been trained
>enough (hopefully) by the developers of the program.
>
>A NN cannot look deep, but it can learn an evaluation function.
>
>In games such as chess, othello, checkers and so on you have a deep-looking part
>(alpha-beta MiniMax) and an evaluation part. A NN can learn the evaluation
>function; the deep-looking part is separate from it.
>
>Ralph
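
That separation of search and evaluation can be sketched in a few lines. This is only an illustrative toy, not any real engine's code: the "game" below is made up (positions are integers, a move adds 1 to 3, and the side to move prefers even totals), just enough to show that the searcher only touches the position through `evaluate()`, so that function could just as well be a trained NN as a hand-written one.

```python
def legal_moves(pos):
    # toy move generator: every position allows adding 1, 2, or 3
    return [1, 2, 3]

def make_move(pos, move):
    return pos + move

def toy_eval(pos):
    # toy evaluation from the side to move's point of view:
    # even totals are good (+1), odd totals are bad (-1)
    return 1 if pos % 2 == 0 else -1

def alphabeta(pos, depth, alpha, beta, evaluate):
    """Fail-hard alpha-beta in negamax form; evaluate() could be a NN."""
    moves = legal_moves(pos)
    if depth == 0 or not moves:
        return evaluate(pos)      # the only place evaluation happens
    for move in moves:
        score = -alphabeta(make_move(pos, move), depth - 1,
                           -beta, -alpha, evaluate)
        if score >= beta:
            return beta           # cutoff: opponent won't allow this line
        if score > alpha:
            alpha = score
    return alpha
```

With this toy game the root score simply flips sign with the parity of the depth, which makes the negamax sign convention easy to check by hand.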


If a NN learns, you should not block its ability to learn. For instance, suppose I give it all the chess knowledge humans have gathered: it can only beat us once it starts to know more than we do, which means it has to keep learning in practice. And then there are the parameters you gave it. Who says the NN likes those parameters and wouldn't rather play another style? Botvinnik was a good teacher because he taught Kasparov his knowledge but never forced him to play in his own style.

The first piece of knowledge you should look at closely is mobility, not to be confused with space. Space can increase your mobility and decrease your opponent's, but you always have to account for the opponent's breakthroughs, after which all of a sudden all your pieces are weak against attacks because they can't support each other. So space is good as long as it is backed by enough pressure; otherwise it can be bad, or simply insufficient. You also have to keep an eye on the mobility of the pawn chain, meaning: can it still open files? This is the kind of knowledge you could teach a NN but cannot easily teach to a program based on search.
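
One way this kind of knowledge could reach a NN is as input features: reduce each position to numbers for mobility, space, breakthroughs, and pawn-chain flexibility, and let the network learn how much they matter. The sketch below is purely hypothetical; none of these feature definitions or weights come from any real engine.

```python
def features(my_moves, opp_moves, my_space, opp_breakthroughs, open_file_chances):
    """Hypothetical feature vector for a learned evaluation."""
    return [
        # mobility difference: not the same thing as space
        my_moves - opp_moves,
        # space only counts while it is backed up: an available
        # breakthrough for the opponent turns it into a liability
        my_space if opp_breakthroughs == 0 else -my_space,
        # pawn-chain mobility: can the pawns still open files?
        open_file_chances,
    ]

def linear_eval(f, weights=(0.1, 0.05, 0.2)):
    # stand-in for the trained network: any learned function of f would do
    return sum(w * x for w, x in zip(weights, f))
```

The point of the split is that the concepts are hand-chosen but their relative importance is learned, which is exactly the part humans argue about.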

Maybe with this knowledge, plus enough knowledge about king safety and pawn structures, it might already play well without hand-set parameters. Still, it would help a lot more if it could search deeper than 3 plies. I also still don't understand why it can only look 3 plies ahead. Is there a restriction in the code?


Marc




Last modified: Thu, 07 Jul 11 08:48:38 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.