Computer Chess Club Archives



Subject: Re: chess and neural networks

Author: Ralph Stoesser

Date: 14:27:06 07/01/03


On July 01, 2003 at 17:08:29, Marc van Hal wrote:

>On July 01, 2003 at 16:17:37, Magoo wrote:
>
>>On July 01, 2003 at 16:02:14, Albert Bertilsson wrote:
>>
>>>On July 01, 2003 at 15:55:07, Anthony Cozzie wrote:
>>>
>>>>On July 01, 2003 at 15:42:42, Albert Bertilsson wrote:
>>>>
>>>>>>Yes, but things are different with chess. In backgammon, you don't need to do
>>>>>>deep searches. Backgammon is a randomized game, chess is not. There have been
>>>>>>attempts, but not very successful ones. I have looked at KnightCap, which uses
>>>>>>standard minimax with an ANN to evaluate the quiet positions. It has a rating
>>>>>>of about 2200 at FICS... pretty good, but nowhere near the top. I guess a
>>>>>>program with minimax only counting material would have a rating near that.
>>>>>>Like they say, chess is 99% tactics. Nothing beats deeper searching.
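As an illustration of the scheme described above, here is a minimal Python
sketch of minimax with a neural network scoring the quiet leaves. All the
helpers (nn_evaluate, is_quiet, capture_moves, legal_moves, make_move,
undo_move) are hypothetical placeholders, not KnightCap's actual interface:

    INF = float('inf')

    def minimax(pos, depth, maximizing):
        # Fixed-depth minimax; the pre-trained network is only trusted
        # on quiet positions, so noisy leaves get a capture search first.
        if depth == 0:
            return nn_evaluate(pos) if is_quiet(pos) else quiescence(pos, maximizing)
        best = -INF if maximizing else INF
        for move in legal_moves(pos):
            make_move(pos, move)
            score = minimax(pos, depth - 1, not maximizing)
            undo_move(pos, move)
            best = max(best, score) if maximizing else min(best, score)
        return best

    def quiescence(pos, maximizing):
        # Resolve captures until the position settles, then trust the net.
        best = nn_evaluate(pos)  # "stand pat": the mover may decline to capture
        for move in capture_moves(pos):
            make_move(pos, move)
            score = quiescence(pos, not maximizing)
            undo_move(pos, move)
            best = max(best, score) if maximizing else min(best, score)
        return best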
>>>>>
>>>>>2200 on FICS with MiniMax counting material only?
>>>>>
>>>>>That is crazy!
>>>>>
>>>>>One of us is wrong, and I hope it isn't me, because I've spent many hours on
>>>>>my engine and it is still nowhere near 2200 in anything other than Lightning!
>>>>>If you're right, I'm probably the worst chess programmer ever, or have
>>>>>misunderstood your message completely.
>>>>>
>>>>>/Regards Albert
>>>>
>>>>
>>>>Your engine, being new, still has a lot of bugs. I'm not trying to insult you;
>>>>it took me a full year to get my transposition table right. At least, I think
>>>>it's right. Maybe. Anyway, the point is that it takes quite a while to get a
>>>>good framework. I suspect that on ICC a program with PST evaluation only could
>>>>get 2200 blitz. (With material evaluation only it would play the opening
>>>>horribly, e.g. Nc3-b1-c3-b1-c3, "oh darn, I lose my queen" sort of stuff.)
>>>>
>>>>Anthony
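For concreteness, this is roughly what "PST evaluation" means. The table
values and the position.pieces() interface below are made up for illustration,
not taken from any particular engine:

    PIECE_VALUE = {'P': 100, 'N': 320, 'B': 330, 'R': 500, 'Q': 900, 'K': 0}

    # One 64-entry table per piece type; the knight table rewards central
    # squares, which is exactly what stops the Nc3-b1-c3 shuffling above.
    KNIGHT_PST = [
        -50, -40, -30, -30, -30, -30, -40, -50,
        -40, -20,   0,   5,   5,   0, -20, -40,
        -30,   5,  10,  15,  15,  10,   5, -30,
        -30,   0,  15,  20,  20,  15,   0, -30,
        -30,   5,  15,  20,  20,  15,   5, -30,
        -30,   0,  10,  15,  15,  10,   0, -30,
        -40, -20,   0,   0,   0,   0, -20, -40,
        -50, -40, -30, -30, -30, -30, -40, -50,
    ]
    PST = {'N': KNIGHT_PST}  # tables for the other pieces omitted for brevity

    def evaluate(position):
        # Material plus square bonus, from White's point of view.
        # square ^ 56 mirrors the board vertically for Black's pieces.
        score = 0
        for piece, square, is_white in position.pieces():
            table = PST.get(piece, [0] * 64)
            value = PIECE_VALUE[piece] + table[square if is_white else square ^ 56]
            score += value if is_white else -value
        return score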
>>>
>>>I agree that PST evaluation with alpha-beta and a transposition table can play
>>>at least decent chess, but those are quite a few powerful improvements over
>>>minimax with material only.
>>>
>>>/Regards Albert
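A minimal sketch of that combination, alpha-beta (in negamax form) with a
transposition table; the helpers are again assumed, with pos.hash() standing
in for a Zobrist-style key:

    INF = float('inf')
    tt = {}  # position key -> (depth, score)

    def alphabeta(pos, depth, alpha, beta):
        key = pos.hash()
        hit = tt.get(key)
        if hit is not None and hit[0] >= depth:
            return hit[1]
        if depth == 0:
            return evaluate(pos)  # must score from the side to move's view
        best = -INF
        for move in legal_moves(pos):
            make_move(pos, move)
            best = max(best, -alphabeta(pos, depth - 1, -beta, -alpha))
            undo_move(pos, move)
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # beta cutoff: the opponent won't allow this line
        tt[key] = (depth, best)
        return best

This sketch stores every score as if it were exact; a real table also has to
record whether the score is exact, a lower bound, or an upper bound, which is
where the subtle bugs Anthony mentions tend to live.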
>>
>>I said near, and when I say minimax, I really mean alpha-beta (no one uses a
>>straightforward minimax). When my engine was "born" (minimardi) it had only
>>material evaluation; searching 4 ply, it could play a decent game, rated around
>>1700 blitz at FICS. Now, consider searching around 8 ply; I think a rating >2000
>>is not hard to imagine. My point was that in chess, the most important thing
>>for accurately evaluating positions is a deep search. No matter what methods
>>you use, if you search deeply your program will play decently. This is one of
>>the reasons why ANNs have worked so well in backgammon and not in chess.
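The "material evaluation only" baseline is simply a piece count; something
like the following (with the same assumed position interface), dropped in
place of evaluate() in the alpha-beta sketch above:

    PIECE_VALUE = {'P': 100, 'N': 320, 'B': 330, 'R': 500, 'Q': 900, 'K': 0}

    def evaluate_material(position):
        # Count material from White's point of view, then flip the sign
        # so the score is always from the side to move (negamax convention).
        score = 0
        for piece, square, is_white in position.pieces():
            score += PIECE_VALUE[piece] if is_white else -PIECE_VALUE[piece]
        return score if position.white_to_move else -score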
>
>Can't neural networks look deep?
>Why is that?
>And do neural networks learn or not?
>
>Marc

No to the first question in any case, and no to the second question with
respect to Snowie backgammon.
NN backgammon programs like Snowie look at most 3 ply ahead and evaluate the
'minimaxed' positions with a pre-trained NN. They no longer learn while
playing, although it would also be possible to do so.
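To make that shallow lookahead concrete, here is a rough Python sketch of the
Snowie-style scheme: a frozen, pre-trained network scores the leaves, and each
ply averages over all 21 distinct dice rolls, which is why the search stays so
shallow. nn_evaluate, legal_moves and apply_move are assumed placeholders:

    from itertools import combinations_with_replacement

    DICE_ROLLS = list(combinations_with_replacement(range(1, 7), 2))  # 21 rolls

    def roll_weight(roll):
        # Doubles occur one way in 36; non-doubles two ways.
        d1, d2 = roll
        return 1 / 36 if d1 == d2 else 2 / 36

    def lookahead(pos, plies):
        # Expected equity from the side to move's point of view (negamax).
        if plies == 0:
            return nn_evaluate(pos)  # the network does not learn during play
        total = 0.0
        for roll in DICE_ROLLS:
            # For each roll, the mover picks the reply the net likes best;
            # the recursive value is negated because the turn passes.
            # legal_moves is assumed to include a pass move when blocked.
            best = max(-lookahead(apply_move(pos, m), plies - 1)
                       for m in legal_moves(pos, roll))
            total += roll_weight(roll) * best
        return total

Because every extra ply multiplies the work by 21 rolls times the move
choices per roll, even 3 ply is already expensive, which fits the point made
above about why these programs stay shallow.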


