Computer Chess Club Archives


Subject: Re: Knowledge again, but what is it?

Author: Don Dailey

Date: 12:19:50 02/25/98


On February 25, 1998 at 13:06:36, Amir Ban wrote:

>On February 25, 1998 at 11:29:06, Don Dailey wrote:
>
>>>I have wondered why programs' evaluations are measured in pawns
>>>instead of probabilities of winning.  Has no one done this?  Has
>>>anyone ever taken the evaluations in Informant as probabilities of
>>>winning and regressed them against explanatory variables such as
>>>material, space, etc. to fit this function?
>>>
>>>George Essig
>>
>>I'm actually working on this!  This is how I think of
>>evaluation, and it would be natural to convert the program
>>to this system.  However, I'm not sure it's any more useful
>>than simply finding the right function to convert a score
>>to a probability.  Most of its usefulness comes from just
>>thinking in these terms (whether you actually implement
>>it or not).  For instance, I believe having an advanced
>>passed pawn should not affect your probability of winning
>>much if you are already a piece up, but should have
>>more impact on the score if you are down a piece.  A simple
>>linear bonus for this passed pawn might not be quite right.
>>
>>In general, I believe many positional terms should change
>>in value when material is not close to zero.  Another way
>>of viewing this is to say, "don't be as eager to hunt pawns
>>if you already have extra material."  It's the same
>>concept.
>>
>>
>>- Don
>
>
>I've also done some work on this. It seems that the most natural
>probability mapping is: 1 / (1 + exp(-x/c)), where x is your eval and
>c is a suitable positive constant for scaling. It meets the necessary
>boundary conditions and symmetry requirements.
>
>I don't think I agree with your statement on needing to change
>positional terms according to the base score.

I'm not clear what you mean.  Do you mean giving a term more weight
for one side if that side is down?  If so, then I admit it's just a
guess, but I feel like it might help.

>                                              Actually with this
>function it makes perfect sense for them to be simple additives. If
>you look at how it behaves, you will see that a small fixed increment
>will change 50% to 60%, but 99% to only 99.2% and 0.8% to 1% (just
>offhand examples). I.e., they don't affect the expected outcome
>seriously unless the game is reasonably even.

I think one of us is missing the point (maybe me).  If you are a pawn
down, it might seem like getting a passed pawn will be worth more, but
in fact nothing is really happening.  There is equal scaling for all
terms, and I believe the program will play identically if we replace
the evaluation with f(eval), where f is your probability mapping
function and eval is the current static evaluator.  Since f is
strictly increasing, any two positions will compare the same way under
either scoring function.

But I like the mapping function.  It would be simple to fit it to your
program and then display the program's assessment of its chances in
probabilistic terms!  Cilkchess (or Junior) thinks it has an 80%
chance of winning!
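
Just to make the idea concrete, here is a quick C sketch.  The 400
centipawn scaling constant is a pure guess (it would have to be fit
to the program's own evaluation), and win_prob is just a made-up
name:

#include <math.h>
#include <stdio.h>

/* Placeholder scaling constant in centipawns; it would have to be
   fit against real game results. */
#define SCALE_CP 400.0

/* Amir's mapping 1 / (1 + exp(-x/c)): a score becomes an estimated
   probability that the position is won. */
static double win_prob(double score_cp)
{
    return 1.0 / (1.0 + exp(-score_cp / SCALE_CP));
}

int main(void)
{
    /* Display the assessment probabilistically, as suggested above. */
    double score = 123.0;   /* the program thinks it is up ~1.2 pawns */
    printf("%+.0f cp -> %.1f%% chance of winning\n",
           score, 100.0 * win_prob(score));

    /* A fixed bonus matters most when the game is even, which is
       Amir's point about simple additive terms:
       +162 cp moves 50% to 60%, but 99% only to about 99.3%. */
    printf("even:    %.1f%% -> %.1f%%\n",
           100.0 * win_prob(0.0),    100.0 * win_prob(162.0));
    printf("winning: %.1f%% -> %.1f%%\n",
           100.0 * win_prob(1838.0), 100.0 * win_prob(2000.0));
    return 0;
}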

But other interesting things can be done with this.  Does a 1-ply
search that returns a score of "half a pawn" offer the same winning
chances as a 10-ply search that returns the same score?  I argue that
the 1-ply search is much more likely to be in error, and therefore
perhaps the true chances of winning are closer to even than the
10-ply search's estimate.  However, you might also argue that the
1-ply search could be in error in either direction!  Maybe we are
more than half a pawn up!  But I believe that the less reliable the
evaluation, the more likely the true score is in the zero (draw)
direction.  I don't know if this is correct or not, but again it is
just my guess.  This could probably be tested with a few hundred
auto-test games.
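
To make that guess concrete, here is how such a discount might look
in C.  The reliability curve below is completely invented; its real
shape is exactly what those auto-test games would have to measure:

#include <math.h>
#include <stdio.h>

#define SCALE_CP 400.0   /* same placeholder scaling constant */

static double win_prob(double score_cp)
{
    return 1.0 / (1.0 + exp(-score_cp / SCALE_CP));
}

/* Invented curve for how much to trust a d-ply score:
   1 ply -> ~0.22, 10 ply -> ~0.92. */
static double reliability(int depth)
{
    return 1.0 - exp(-depth / 4.0);
}

/* Shrink the estimate toward 50% (the draw score) when the search
   is shallow and therefore less reliable. */
static double adjusted_prob(double score_cp, int depth)
{
    double p = win_prob(score_cp);
    return 0.5 + (p - 0.5) * reliability(depth);
}

int main(void)
{
    printf("+50 cp at  1 ply: %.1f%%\n",
           100.0 * adjusted_prob(50.0, 1));    /* ~50.7% */
    printf("+50 cp at 10 ply: %.1f%%\n",
           100.0 * adjusted_prob(50.0, 10));   /* ~52.9% */
    return 0;
}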

But now a good question is: what does the probability measure?  Is it
the probability that the computer will win?  I think the correct
interpretation should be "the probability that the position is a won
position."  A dead draw should be counted as 50% of a win, or a 50%
probability of winning.  If we say it's the probability that the
computer will win, then it's completely ambiguous, because we do not
know what assumptions to make about the strength of the opponent!  If
Cilkchess is down half a pawn against most humans, its chances of
winning are still greater than 50%.  Another possible interpretation
is the probability of winning against an equal opponent (whatever
that is!)
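
If a draw really is half a win, then the natural thing to measure
from a batch of auto-test games is the expected score.  A tiny C
sketch, with invented result counts:

#include <stdio.h>

int main(void)
{
    /* Invented counts from a hypothetical auto-test run. */
    int wins = 62, draws = 55, losses = 83;
    double games = wins + draws + losses;

    /* A draw counts as half a win, matching the interpretation
       above. */
    double expected = (wins + 0.5 * draws) / games;

    printf("Expected score: %.3f\n", expected);   /* about 0.45 */
    return 0;
}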

More than anything, though, your mapping function, and thinking about
the evaluation function probabilistically, is probably a more
accurate way to think about chess, although it may simply boil down
to semantics.  In my own games, when I'm losing I try to find ways to
complicate and mix up the position, because I feel it gives me a
chance of recovering.  I think I evaluate differently.  Also, I'm
less likely to grab a pawn if I'm already a piece up and I think it
gives my opponent even a slight chance.  But I'll grab it for sure if
material is even and I think he has only a slight chance of coming
out better.  Is this correct?  I don't know for sure.


>Looking at it another way: There must be some mapping where simple
>addition of terms makes sense. Since this is what you currently do,
>after so many years of tuning you can expect that your evaluation
>would be a close approximation of it.
>
>Amir


