Computer Chess Club Archives



Subject: Re: Opening, Middlegame, Endgame (for authors)

Author: Tom Likens

Date: 09:05:37 06/17/04



On June 17, 2004 at 11:40:58, Fabien Letouzey wrote:

>On June 17, 2004 at 11:15:40, Tom Likens wrote:
>
>>On June 17, 2004 at 05:09:36, Fabien Letouzey wrote:
>>>Fruit does exactly that, in an attempt to reduce the blemish effect described by
>>>Hans Berliner.
>>>
>>>That means two scores are associated with each evaluation feature.  The
>>>"computational expense" can be reduced to near 0 if you feel like it (mixing two
>>>16-bit values into a single 32-bit integer).
>>>
>>>Maybe the main drawback is that features are evaluated regardless of the
>>>position.  Of course you can make exceptions if you like, but that somewhat
>>>defeats the original goal.
>>>
>>>Fabien.
>
>>Hello Fabien,
>
>>A couple of questions and perhaps a clarification.  What do you use for the
>>x-axis variable of interpolation (I'm guessing the value of material left for
>>each side, excluding pawns)?  Also, do you scale the values for black and
>>white independently of each other, or do you simply blend the final value?
>
>Single interpolation.  I use the sum of all material (both sides, 0/3/3/5/9 as
>you mentioned).
>
>>My naive assumption about how this would work is as follows:
>>
>>1. Compute all the common middlegame/endgame terms.  Computational
>>    cost zero (they have to be calculated regardless).
>>
>>2. Calculate the middlegame-only terms *and* calculate the endgame-only
>>    terms. Computational cost not zero, since some terms will be calculated
>>    which may not be needed.
>>
>>3. Interpolate based on the material (as I mentioned in my previous post,
>>   some of the terms would be scaled based on the opponent's material etc.)
>>
>
>This is exactly what I do.  Bear in mind that I have almost nothing in the
>evaluation so I have no such thing as a middlegame-only or endgame-only term.  I
>will most likely reconsider when adding king safety and other features.
>
>>I believe this would go a long way towards making the evaluation function
>>"continuous" at the break between the middlegame and endgame.  I've held
>>off on doing this since I was afraid the computational costs would outweigh
>>the benefit, but perhaps I should revisit it.
>
>Maybe I don't understand what you do (as described in the first part of your
>post).  In my understanding the only difference is that you have several
>interpolation parameters.  In my understanding your evaluation is also
>"continuous".

I was probably a bit unclear.  Most of the program's evaluation terms are
scaled (i.e. blended), but some terms are not.  A simple example is the
king-safety table.  In the middlegame the program gives a bonus for the
king being near the corner.  In the endgame the situation is reversed and
the bonus is for the king being near the "action" (more than likely the
center of the board).  This term currently isn't scaled but is added in
wholesale, which causes a discontinuity at the boundary, similar to
Berliner's "blemish effect".  Interpolating the overall score would fix,
or at least mitigate, this problem (and others like it).  You are correct
though; my evaluation has been moving towards being an overall
blended score.

>BTW, what is the additional "computational cost" of the single blending?

Again, in the trivial example above, to interpolate the king-position value you
would have to compute both scores and take a (weighted) average of them.
Very small cost, but not zero (probably too small to really worry about,
though).  Of course, this is a trivial example; some eval terms might
(will) be expensive to calculate.  Comprehensive king safety seems
likely to be expensive.  Currently, Djinn spends a lot of time in the
middlegame on king safety and very little in the endgame.  Using a truly
blended eval would cause that cost to be incurred even in the endgame.

>>One other item, I'm not sure I understand your 16-bit vs. 32-bit comment.
>>Are you returning a 32-bit integer from all your evaluation subroutines which
>>is comprised of a 16-bit middlegame score and a 16-bit endgame?  This
>>makes some sense, because you would only have to interpolate once, but
>>how would you differentiate terms that needed to be scaled depending on
>>your material vs. your opponent's material, since this likely needs to be
>>handled on a term-by-term basis?
>
>I specifically quoted only the part where you mentioned "blending".  I assumed
>you were talking about a single interpolation.  Otherwise again, I don't
>understand the difference with what you are using.

Probably my being very unclear again.  I was only surmising that you could
have one variable for tracking the score of the middlegame-only features and
another variable for tracking the endgame-only scores.  At the end of the
evaluation routine (before returning to the main search) you could perform
a single interpolation on the two totals rather than multiple interpolations
as each term was evaluated, saving a cycle or two (but probably not worth
the effort; just something to think about).

>That being said, I think the "computational cost" of a few multiplications is
>negligible in our programs.  I don't use the "trick" I mentioned.  It was only
>for the sake of argument.
>
>Fabien.

Agreed.  This would definitely *not* be the slowest part of my program!! :-)

--tom



Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.