Computer Chess Club Archives



Subject: Re: Hashing pawn structures - how?

Author: Robert Hyatt

Date: 05:07:53 11/05/98



On November 05, 1998 at 01:06:17, Don Dailey wrote:

>On November 04, 1998 at 22:59:26, Robert Hyatt wrote:
>
>>>>You *must* scale... you can't just suddenly decide to turn something off or
>>>>on, because you introduce a discontinuity in your evaluation.
>>>>I have a few
>>>>of these right now because I have been testing some new eval code, and as
>>>>expected, right around the discontinuity I saw some gross problems, like
>>>>sacrificing a pawn just to "jump over the gap" and turn something off.  Once
>>>>I was sure I wanted to turn something off or on, I then used a "Scale()"
>>>>function I have to gradually turn it on or off so this doesn't happen.
>
>>nothing wrong with using opponent's pieces for king safety...  that's exactly what
>>I do.  What is wrong is setting a threshold and just cutting king safety off
>>when pieces drop below some level... because  you may well sac a pawn to trade
>>a knight that gets you under that "level" so that king safety kicks out...  but
>>you lose for doing so.  I use a simple "ScaleByMaterial()" macro to make the
>>king safety proportional to remaining material...  as pieces go down, so does
>>the importance of king safety...  but it is linear from all pieces on the board
>>all the way down to zero...  no discontinuities...  Ditto for other things like
>>the value of a passed (or outside passed) pawn...
>
>Hi Bob,
>
>What you are really talking about is just having an incorrect evaluation;
>I'm not sure scaling has that much to do with it.  I completely agree
>that your examples are correct, but most features should turn on or
>off or be gradual only according to their relevance to the position.
>
>This however is not a contradiction of what you said, just another way
>of looking at it.
>
>A simple example of this is backward pawns on open files.  This should
>not get a very big penalty if rooks or queens do not exist for the
>opponent.  This term SHOULD suddenly kick in (or out) when the last rook
>and queen disappears.


I disagree here.  I think it ought to gradually phase out as each rook comes
off the board.  The penalty ought to be pretty large, since such a pawn
is a good target...  and chopping the whole thing off after removing a single
piece can lead to gross mistakes...

In my case, I simply count these pawns when evaluating pawns, and then fold
the score in on a rook-by-rook basis...  which means 1/2 the penalty goes away
when one rook comes off, and the rest of it goes away when the other rook comes
off, leaving only the "weak pawn" score...
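
Concretely, that fold can be as simple as the following sketch (the names and
the centipawn value are illustrative assumptions, not the actual Crafty code):

/* Fold the "weak pawn on an open file" penalty in rook by rook.
   Names and the value here are illustrative only. */
#define WEAK_OPEN_PAWN_PENALTY 24   /* full penalty, both enemy rooks on */

int WeakOpenFilePenalty(int weak_open_pawns, int enemy_rooks) {
  /* enemy_rooks is 0, 1 or 2: the full penalty applies with both rooks
     on the board, half with one, and nothing here with none (the
     generic "weak pawn" score is charged elsewhere). */
  return weak_open_pawns * WEAK_OPEN_PAWN_PENALTY * enemy_rooks / 2;
}

Trading one rook halves the term and trading the second removes it, so no
single capture ever moves the score by more than half the penalty.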

Sometimes you have to be pretty "granular"...  i.e. queens on/off when a king is
exposed, because there is only one queen...


>There is no scaling algorithm (or principle)
>that makes sense here; the only thing that matters is its relevance to the
>position.  If there was a way to figure out that the last rook was
>unlikely to attack the pawn, then in this case some kind of scaling
>would be appropriate, but only because we know more about the position
>and can be more accurate.  Perhaps a very minor scaling could take
>place based on how many enemy major pieces are present, but in principle
>a single one is almost as bad as 2 or 3.  It certainly wouldn't be
>appropriate to scale them in linear fashion.
>


It doesn't have to be linear, of course, as my king safety certainly isn't...
but in the case of a weak pawn on an open file, the only reason the open
file factors in at all is the rooks.  I'm not sure why it should stay around
in full effect when only one rook is left...
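
For what it's worth, the kind of "ScaleByMaterial()" macro mentioned above
can be as simple as this sketch (an assumed implementation with illustrative
names, not the actual macro):

/* Linear material scaling for king safety -- assumed implementation.
   31 = N+N+B+B+R+R+Q in conventional piece values (3+3+3+3+5+5+9). */
#define TOTAL_PIECE_MATERIAL 31

#define ScaleByMaterial(score, enemy_piece_material) \
  ((score) * (enemy_piece_material) / TOTAL_PIECE_MATERIAL)

With everything on the board the term applies in full; trade the queens and
it drops to 22/31 of full value, reaching zero only when the last piece
leaves, so there is never a gap worth sacrificing a pawn to jump over.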


>In your king safety example, scaling is a big improvement because
>you improved the accuracy of the score in those positions where less
>material is on the board.  Before "scaling" the eval was only correct
>in the case it was most adjusted to.
>
>So what you are saying about discontinuity sort of camouflages the real
>issue (in my opinion) which is simply having the most accurate score
>in as many situations as possible.  With certain terms some type of
>scaling like we use in our programs accomplishes this.
>
>Of course you are also right about the discontinuities.  I'm just
>being nitpicky about this stuff.  I have seen the same phenomenon
>you describe many times.  Before I had endgame databases we had
>some perfect endgame knowledge but it only identified a subset
>of positions with perfect scores.  We got strange results where
>we would solve a problem and on the next iteration UNSOLVE it
>because the extra ply enabled us to find a position we couldn't
>evaluate correctly!  That's the real issue, simply evaluating
>as many positions as correctly as you can and this will sometimes
>involve "scaling."
>
>Any time you can fill in the gaps between 2 positions, it is a good
>thing.  In my backward pawn example, having a major piece is good
>against the opponent's backward pawn on an open file, but it is
>even better if your rook is ON that file.  You could be even more
>gradual by saying that having access to the file the pawn is on deserves
>a little credit, more gradualism.  But all of this has to do with
>evaluating a few more positions a little better, not really
>gradualism.  It's semantics: six of one, half a dozen of the other.
>
>
>- Don


I handle the weak pawn problem differently.  I penalize it in three ways:

(1) being weak in the general pawn-structure eval;

(2) factoring in rooks (if present) later in the eval;

(3) giving rooks on such a file the same "open file" bonus they get on a real
open file...
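
Putting the three together, the net contribution against the side with the
weak pawns (White here; all names and centipawn values are illustrative
assumptions, not the actual code) looks something like:

/* Sketch combining the three weak-pawn terms above.  Values and names
   are illustrative; Black's mirrored terms are computed the same way. */
#define WEAK_PAWN_PENALTY      12   /* (1) generic structural weakness    */
#define WEAK_OPEN_PAWN_PENALTY 24   /* (2) extra term, folded in per rook */
#define OPEN_FILE_BONUS        20   /* (3) same bonus as a real open file */

int WhiteWeakPawnScore(int w_weak, int w_weak_open, int b_rooks,
                       int b_rooks_on_those_files) {
  int score = 0;                /* from White's point of view */
  score -= w_weak * WEAK_PAWN_PENALTY;                          /* (1) */
  score -= w_weak_open * WEAK_OPEN_PAWN_PENALTY * b_rooks / 2;  /* (2) */
  score -= b_rooks_on_those_files * OPEN_FILE_BONUS;            /* (3) */
  return score;
}

Here w_weak counts all of White's weak pawns, w_weak_open the subset on open
files, and b_rooks_on_those_files the Black rooks actually posted on those
files, which collect the same bonus they would get on a truly open file.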


