Computer Chess Club Archives



Subject: Re: Evaluation Function

Author: Kristo Miettinen

Date: 15:23:57 05/06/99



Hi Jose!

A different idea that I tried once was to vary my evaluation-function parameters
within each iterative-deepening search, so as to impose consistency from one
iteration to the next.

In other words, not training on grandmaster examples but rather striving to
"see" at depth N-1 what you found at depth N, for the specific position at hand.

The pseudocode goes something like this:

Begin with an initial evaluation function, taken from the previous search or from a database.

Set search depth N = 2.

Label A: Conduct depth N search.

Recall the results of the depth N-1 search with the same parameters (no new
computation; they were saved from the previous iteration).

Vary one evaluation parameter (I cycled through the parameters in a fixed,
exhaustive sequence, using small fixed adjustments).

Conduct a depth N-1 search with the modified evaluation function.

If the modified evaluation agreed more closely with the depth N search than the
unmodified depth N-1 search did (meaning it chose the same move and gave a
closer value estimate), then accept the modification, check time consumption,
and if enough time remains return to Label A above (otherwise exit and move).

Otherwise, make the opposite variation to the evaluation parameter (if
incrementing the parameter made things worse, then perhaps decrementing it will
make things better).

Conduct another depth N-1 search.

If this modified evaluation agreed more closely with the depth N search, then
accept the modification; otherwise revert to the original evaluation function.

If the original evaluation function has been retained after both attempted
modifications, increment N. Note that changing the evaluation function because
of the results of a depth N-1 search causes a new search at depth N, not at N+1.

Check time consumption, and if enough time remains return to Label A above,
otherwise exit and move.
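
In rough Python, the loop looks something like this. The search() routine, the
parameter names, and the agreement measure below are just illustrative
stand-ins to show the shape of the loop, not actual engine code; a real engine
would run alpha-beta inside search() and reuse the cached depth N-1 result
instead of recomputing it.

import time

def search(position, depth, params):
    # Stand-in for a fixed-depth engine search returning (best_move, score).
    # A real engine would run alpha-beta here and would keep the depth N-1
    # result from the previous iteration rather than recomputing it.
    score = sum(params.values()) * (1.0 + 0.1 * depth) / 100.0
    return ("e4" if score >= 0 else "d4"), score

def agreement(shallow, deep):
    # Smaller is better: infinite disagreement if the chosen moves differ,
    # otherwise the gap between the shallow and deep score estimates.
    (move_s, score_s), (move_d, score_d) = shallow, deep
    return abs(score_s - score_d) if move_s == move_d else float("inf")

def tune_while_searching(position, params, order, step=5, time_budget=0.05):
    deadline = time.time() + time_budget
    n, i = 2, 0                                    # i indexes the exhaustive parameter sequence
    while True:
        deep = search(position, n, params)         # Label A: depth N search
        shallow = search(position, n - 1, params)  # remembered from the previous pass in a real engine
        base = agreement(shallow, deep)

        name = order[i % len(order)]               # vary the next parameter in the fixed sequence
        i += 1
        accepted = False
        for delta in (+step, -step):               # try one adjustment, then its opposite
            trial = dict(params, **{name: params[name] + delta})
            if agreement(search(position, n - 1, trial), deep) < base:
                params, accepted = trial, True     # accept: forces a new depth N search, not N+1
                break
        if not accepted:
            n += 1                                 # both adjustments rejected, so deepen

        if time.time() >= deadline:                # check time consumption
            return deep[0], params                 # exit and move

weights = {"bishop_pair": 30, "rook_on_7th": 20, "doubled_pawn": -12}
move, tuned = tune_while_searching("startpos", weights, order=list(weights))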

I gave up on this approach when I gave up on iterative deepening and moved
instead to an architecture in which a connected graph of the entire search is
stored entirely in memory, with the graph grown by selectively expanding one
node at a time.
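
The skeleton of that kind of one-node-at-a-time growth looks roughly like the
sketch below. The toy evaluate() and expand() routines and the greedy selection
rule are placeholders (the real selection criteria are the interesting part),
and for brevity this sketch grows a tree rather than merging transpositions
into a true graph.

def evaluate(position):
    return -(position % 7)                         # toy static evaluation

def expand(position):
    return [2 * position + 1, 2 * position + 2]    # toy move generation

class Node:
    def __init__(self, position):
        self.position = position
        self.value = evaluate(position)            # negamax value from the side to move
        self.children = []                         # empty until this node is expanded

def select_leaf(node):
    # Descend through each side's current best reply until an unexpanded node.
    while node.children:
        node = max(node.children, key=lambda c: -c.value)
    return node

def backup(node):
    # Recompute negamax values bottom-up over the whole stored graph.
    if node.children:
        for c in node.children:
            backup(c)
        node.value = max(-c.value for c in node.children)

def grow(root, expansions):
    for _ in range(expansions):                    # grow the stored search one node at a time
        leaf = select_leaf(root)
        leaf.children = [Node(p) for p in expand(leaf.position)]
        backup(root)                               # keep the in-memory graph consistent

root = Node(0)
grow(root, 50)
best = max(root.children, key=lambda c: -c.value)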

Sine cera,

-Kristo.


