Author: Amir Ban
Date: 02:15:06 02/26/98
On February 25, 1998 at 15:19:50, Don Dailey wrote:

>On February 25, 1998 at 13:06:36, Amir Ban wrote:
>
>>On February 25, 1998 at 11:29:06, Don Dailey wrote:
>
>>>In general, I believe many positional terms should change
>>>in value when material is not close to zero. Another way
>>>of viewing this is to say "don't be as eager to hunt pawns
>>>if you already have extra material." It's the same
>>>concept.
>>>
>>>- Don
>>
>>I've also done some work on this. It seems that the most natural
>>probability mapping is: 1 / (1 + exp(-x/c)), where x is your eval and c
>>a suitable positive constant for scaling. It meets the necessary
>>boundary conditions and symmetry requirements.
>>
>>I don't think I agree with your statement on needing to change
>>positional terms according to the base score.

I'm just answering your paragraph which I left undeleted above.

>I'm not clear what you mean. Do you mean giving a term more weight
>for one side if that side is down? If so, then I admit it's just a
>guess, but I feel like it might help.
>
>>Actually with this
>>function it makes perfect sense for them to be simple additives. If you
>>see how it behaves, you will see that a small fixed increment will
>>change 50% to 60%, but 99% to only 99.2% and 0.8% to 1% (just offhand
>>examples). I.e. they don't affect the expected outcome seriously unless
>>it's reasonably even.
>
>I think one of us is missing the point (maybe me). If you are a pawn
>down, it might seem like getting a passed pawn will be worth more, but
>in fact nothing is really happening. There is equal scaling for all
>terms, and I believe the program will play identically if we replace
>the evaluation with f(eval), where f is your probability mapping
>function and eval is the current static evaluator. Any two positions
>using either scoring function will compare the same way.
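The behavior of the logistic mapping discussed above can be sketched in a few lines of Python. The function name, the centipawn units, and the scale constant c = 200 are illustrative assumptions, not values from the post; only the formula 1 / (1 + exp(-x/c)) comes from the thread.

```python
import math

def win_prob(eval_cp, c=200.0):
    """Logistic win-probability mapping 1 / (1 + exp(-x/c)).

    eval_cp: static evaluation in centipawns (positive = good for us).
    c: scaling constant; 200 cp is an illustrative choice.
    """
    return 1.0 / (1.0 + math.exp(-eval_cp / c))

# A fixed additive bonus (say, a passed-pawn term) moves the
# expectation a lot in an even position, and very little in a
# decided one -- the point made in the quoted exchange:
bonus = 80  # hypothetical positional bonus, in centipawns
for base in (0, 400, -400):
    print(f"eval {base:+5d}: {win_prob(base):.3f} -> {win_prob(base + bonus):.3f}")
```

With these numbers the bonus shifts 0.500 to about 0.599 in the even position, but 0.881 to only about 0.917 in the winning one, which matches the 50%-to-60% versus 99%-to-99.2% illustration in the quoted text.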
As my example shows, the exponential mapping means that an additive term
changes the expectation most when the position is even, and very little
when it is very uneven. This is in accordance with reality, so the
mapping makes sense, and adding positional terms as constants makes
sense.

>But now a good question is: What does the probability measure? Is it
>the probability that the computer will win? I think the correct
>interpretation should be "the probability that the position is a won
>position." A dead draw should be considered as 50% of a win, or 50%
>probability of winning. If we say it's the probability that the
>computer will win, then it's completely ambiguous, because we do not
>know what assumptions to make about the strength of the opponent! If
>Cilkchess is down half a pawn against most humans then its chances of
>winning are still greater than 50%. Another possible interpretation
>is the probability of winning against an equal opponent (whatever that
>is!)

I thought this question would come up. I don't think there is any
problem in defining the probability. For a single position, talking
about probability is meaningless (that's usually the case with
probabilities), but for many positions it makes sense. Here's a precise
though theoretical definition: take all the positions that you evaluate
as x, look up their game-theoretic value in your 32-piece tablebase,
and average. This is the true outcome expectation (or probability) of
your x-evaluation, and it should be compared to what your probability
mapping says for x, that is, what you thought an x score means. E.g.
you may think a value of +4 means you score 98%, but looking at all
positions that you value +4, you find that they really score only 93%.
Practically, this can be approximated by scanning game databases.

Amir
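The database-scanning approximation described above can be sketched as a small calibration check: bucket positions by their evaluation, average the actual game outcomes in each bucket, and compare against what the mapping predicted. The function names, the outcome encoding (1 win / 0.5 draw / 0 loss), and the bucket width are all illustrative assumptions, not details from the post.

```python
import math
from collections import defaultdict

def win_prob(eval_cp, c=200.0):
    # The logistic mapping from the thread: 1 / (1 + exp(-x/c)).
    return 1.0 / (1.0 + math.exp(-eval_cp / c))

def calibration(samples, bucket=50):
    """Approximate the tablebase definition by scanning game data.

    samples: iterable of (eval_cp, outcome) pairs, outcome being
    1.0 win / 0.5 draw / 0.0 loss from the evaluated side's view.
    Returns {bucketed_eval: (empirical score, predicted probability)}.
    """
    buckets = defaultdict(list)
    for eval_cp, outcome in samples:
        buckets[round(eval_cp / bucket) * bucket].append(outcome)
    return {
        x: (sum(outs) / len(outs), win_prob(x))
        for x, outs in sorted(buckets.items())
    }

# Toy data in the spirit of the +4 example: at +400 cp the mapping
# predicts about 0.88, but these "games" score only 0.75, so the
# mapping is overconfident in this bucket.
data = [(400, 1.0), (400, 1.0), (400, 0.5), (400, 0.5)]
for x, (emp, pred) in calibration(data).items():
    print(f"eval {x:+d}: empirical {emp:.2f}, predicted {pred:.2f}")
```

A real run would feed millions of (evaluation, game result) pairs from a game database, which is exactly the practical approximation the post suggests.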