Computer Chess Club Archives


Subject: Re: chess and neural networks

Author: Russell Reagan

Date: 16:43:43 07/01/03


On July 01, 2003 at 16:17:37, Magoo wrote:

>I said near, and when i say minimax, i really mean alphabeta (no one uses a
>straightforward minimax).

But you said minimax, and nothing else in your post indicated that you
meant alpha-beta.
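
To illustrate the difference on something concrete (a toy tree, not a chess
position; the branching factor, depth, and leaf values below are made up for
the example): plain minimax visits every node, while alpha-beta returns the
same root value after skipping branches that cannot affect the result. A
rough, self-contained C++ sketch:

// Minimal sketch: negamax (plain minimax) vs. alpha-beta over a
// synthetic uniform tree with pseudo-random leaf scores.  Only meant
// to show that alpha-beta gives the same value with far fewer nodes.
#include <cstdio>
#include <cstdint>
#include <algorithm>

const int BRANCH = 8;   // assumed branching factor for the toy tree
const int DEPTH  = 6;   // assumed search depth in plies

long nodes = 0;

// Deterministic pseudo-random leaf score derived from the path id.
int leafScore(uint64_t id) {
    id ^= id >> 33; id *= 0xff51afd7ed558ccdULL; id ^= id >> 33;
    return (int)(id % 201) - 100;               // score in [-100, 100]
}

// Plain negamax (equivalent to minimax): visits every node.
int negamax(uint64_t id, int depth) {
    ++nodes;
    if (depth == 0) return leafScore(id);
    int best = -1000000;
    for (int i = 0; i < BRANCH; ++i)
        best = std::max(best, -negamax(id * BRANCH + i + 1, depth - 1));
    return best;
}

// Same search with alpha-beta cutoffs: identical root value, fewer nodes.
int alphabeta(uint64_t id, int depth, int alpha, int beta) {
    ++nodes;
    if (depth == 0) return leafScore(id);
    for (int i = 0; i < BRANCH; ++i) {
        int score = -alphabeta(id * BRANCH + i + 1, depth - 1, -beta, -alpha);
        if (score >= beta) return beta;         // cutoff: rest cannot matter
        alpha = std::max(alpha, score);
    }
    return alpha;
}

int main() {
    nodes = 0; int v1 = negamax(0, DEPTH);                      long n1 = nodes;
    nodes = 0; int v2 = alphabeta(0, DEPTH, -1000000, 1000000); long n2 = nodes;
    printf("minimax   : value %d, %ld nodes\n", v1, n1);
    printf("alpha-beta: value %d, %ld nodes\n", v2, n2);
    return 0;
}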


>When my engine was "born" (minimardi) it had only
>material evaluation, searching 4 ply, it could play a decent game. Rated around
>1700 blitz at FICS. Now, consider searching around 8 ply, i think a rating >2000
>is not hard to imagine.

300 Elo points is a lot, so I don't think it's valid to say, "I achieved 1700,
so we can assume 2000."

At one point my program had material-only evaluation and alpha-beta, and I
could easily beat it almost every game, and I am nowhere close to 2000, or even
1700. All you have to do is avoid getting nailed by a simple combination and
hold on until the endgame; the stupid program can't come close to seeing passed
pawns until it's way too late, and you win. I even had material+mobility and
alpha-beta with qsearch, and it was still a pushover. But YMMV.
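
To be concrete about what "material only" means, something like the sketch
below: it just sums piece values (the centipawn values and the piece-count
interface are assumptions for the example, not anyone's actual engine code),
so a passed pawn looks exactly like any other pawn until promotion falls
inside the search horizon.

// Sketch of a material-only evaluation, assuming the position is
// summarized as piece counts per side.
#include <cstdio>

enum Piece { PAWN, KNIGHT, BISHOP, ROOK, QUEEN, NPIECE };

// Conventional centipawn values (an assumption; engines vary).
const int value[NPIECE] = { 100, 300, 300, 500, 900 };

struct Counts { int n[NPIECE]; };   // hypothetical piece-count summary

int evaluate(const Counts& white, const Counts& black) {
    int score = 0;
    for (int p = 0; p < NPIECE; ++p)
        score += value[p] * (white.n[p] - black.n[p]);
    return score;                   // positive = better for White
}

int main() {
    // White is a pawn up; whether that pawn is a far-advanced passer
    // or a doubled pawn makes no difference to this evaluation.
    Counts white = {{8, 2, 2, 2, 1}};
    Counts black = {{7, 2, 2, 2, 1}};
    printf("material score: %d centipawns\n", evaluate(white, black));
    return 0;
}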

I agree that it is possible to create a strong program using a simple
evaluation function, but you need to make use of a lot of the other
enhancements, such as a transposition table, good move ordering, forward
pruning, extensions/reductions, etc.
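
As one concrete example of such an enhancement (the entry layout, bound
handling, and always-replace scheme below are assumptions for illustration,
not anyone's actual engine): an alpha-beta searcher probes a transposition
table before searching a node and stores its result afterwards, and the
stored best move also feeds the move ordering.

// Sketch of a small transposition table for an alpha-beta searcher.
#include <cstdio>
#include <cstdint>
#include <cstddef>

enum Bound { EXACT, LOWER, UPPER };

struct TTEntry {
    uint64_t key;       // Zobrist hash of the position
    int      depth;     // remaining depth the stored score is valid for
    int      score;
    Bound    bound;
    int      bestMove;  // reused for move ordering even on a shallow hit
};

const size_t TT_SIZE = 1 << 16;          // modest table for the example
static TTEntry table[TT_SIZE];

void ttStore(uint64_t key, int depth, int score, Bound b, int bestMove) {
    TTEntry& e = table[key % TT_SIZE];   // always-replace scheme
    e = TTEntry{ key, depth, score, b, bestMove };
}

// Returns true and fills *score if the stored result is usable at
// this depth and within the current alpha-beta window.
bool ttProbe(uint64_t key, int depth, int alpha, int beta,
             int* score, int* bestMove) {
    const TTEntry& e = table[key % TT_SIZE];
    if (e.key != key) return false;      // empty slot or different position
    *bestMove = e.bestMove;              // still useful for ordering
    if (e.depth < depth) return false;   // too shallow to trust the score
    if (e.bound == EXACT)                      { *score = e.score; return true; }
    if (e.bound == LOWER && e.score >= beta)   { *score = e.score; return true; }
    if (e.bound == UPPER && e.score <= alpha)  { *score = e.score; return true; }
    return false;
}

int main() {
    uint64_t key = 0x123456789abcdefULL; // made-up hash for the demo
    ttStore(key, 8, 35, EXACT, 42);
    int score = 0, move = 0;
    if (ttProbe(key, 6, -50, 50, &score, &move))
        printf("hit: score %d, best move id %d\n", score, move);
    return 0;
}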


