Computer Chess Club Archives



Subject: Re: 3 Factors for Position Eval

Author: Graham Laight

Date: 05:56:43 01/13/99




On January 12, 1999 at 12:57:18, José de Jesús García Ruvalcaba wrote:

>On January 11, 1999 at 20:58:11, Graham Laight wrote:
>
>>On January 11, 1999 at 13:57:31, José de Jesús García Ruvalcaba wrote:
>>
>>>On January 09, 1999 at 05:55:25, Graham Laight wrote:
>>>
>>>>As I was sitting eating my breakfast just now, it occurred to me that there are
>>>>basically 3 items that, between them, will influence how close an evaluation of
>>>>a chess position is to how good that position really is:
>>>>
>>>>1. The number of pieces of knowledge the evaluation function can call upon
>>>>
>>>>2. The quality of those pieces of knowledge
>>>>
>>>>3. The accuracy of selecting the right pieces of knowledge (and their
>>>>appropriate weightings) for the position at hand
>>>>
>>>>
>>>>Does anybody have any thoughts about this?
>>>
>>>I think that different evaluation functions cannot be compared in isolation.
>>
>>Why not?
>>
>>You take a chess position, and run 2 different evaluation functions against it.
>>
>>The one that more accurately scores the position is the better evaluation
>>function.
>>
>
>Now the problem is, how to measure this accuracy?
>There are only three possible theoretical values for a chess position (white
>wins, draw or black wins), and that value is unknown for most positions. An
>evaluation function would be theoretically accurate if it gave every white win
>a better score than any draw and every draw a better score than any black win,
>but I cannot imagine a way to verify that other than solving the game of chess.
>
>Also, for a moment let us assume that the contempt factor is zero. If you take
>an evaluation function and multiply it by *any* positive number, you get a
>different evaluation function which will *always* lead to the same best move!
>Which one is more accurate?

I agree with the sentiment - it is difficult to get perfect evals for most
positions.
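
To pin down what I meant in the original post by "pieces of knowledge" and
weightings, here is a toy sketch in C. The Position fields, the features and
the weights are all invented for the example - they are not taken from any
real program.

#include <stdio.h>

/* A "piece of knowledge" here is just a feature of the position, and the  */
/* evaluation is a weighted sum of those features.                         */
struct Position {
    int material_balance;       /* centipawns, from White's point of view */
    int white_passed_pawns;
    int black_passed_pawns;
    int white_king_open_files;  /* open files next to the white king      */
};

/* Factor 1 is how many terms appear below, factor 2 is whether each term  */
/* really measures something that matters, and factor 3 is the weights.    */
int evaluate(const struct Position *p)
{
    int score = 0;
    score += p->material_balance;                        /* knowledge 1 */
    score += 30 * (p->white_passed_pawns
                   - p->black_passed_pawns);             /* knowledge 2 */
    score -= 25 * p->white_king_open_files;              /* knowledge 3 */
    return score;
}

int main(void)
{
    struct Position p = { 100, 1, 0, 2 };   /* made-up feature values */
    printf("score = %d\n", evaluate(&p));
    return 0;
}

The hard part, of course, is choosing those terms and their weights in the
first place.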

This always seems to be a problem when I try to build expert systems: you
don't know which knowledge will be most useful until you have built the
system, but you can't build the system without knowing which knowledge will be
most useful.
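
The nearest thing to a direct test of accuracy I can imagine - a rough sketch
only, with invented data; a real test would plug in the engine's own
evaluation and positions whose theoretical value is already known, e.g. from
tablebases - is to count how often an evaluation puts wins above draws and
draws above losses:

#include <stdio.h>

enum Outcome { BLACK_WINS = -1, DRAW = 0, WHITE_WINS = 1 };

struct Sample {
    enum Outcome truth;   /* known theoretical value of the position */
    int score;            /* what the evaluation function returned   */
};

/* Count pairs where a theoretically better position got a lower score. */
int ordering_errors(const struct Sample *s, int n)
{
    int errors = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            if (s[i].truth > s[j].truth && s[i].score <= s[j].score)
                errors++;
    return errors;
}

int main(void)
{
    /* Invented scores from two hypothetical evaluation functions. */
    struct Sample eval_a[] = {
        { WHITE_WINS, 250 }, { DRAW, 10 }, { BLACK_WINS, -300 }, { DRAW, 40 }
    };
    struct Sample eval_b[] = {
        { WHITE_WINS, 80 }, { DRAW, 120 }, { BLACK_WINS, -50 }, { DRAW, 40 }
    };
    printf("eval A ordering errors: %d\n", ordering_errors(eval_a, 4));
    printf("eval B ordering errors: %d\n", ordering_errors(eval_b, 4));
    return 0;
}

Note that this only tests the ordering of the scores, not their magnitudes -
which, as José's scaling argument above shows, is all an evaluation can really
be asked to get right.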

>>>Overall program strength is. I mean, you can compare two evaluation functions
>>>once you have all the other components of the programs fixed; but with a
>>>different set of other components you can get different results.
>>>Among the "other components" I can see:
>>>1. Hardware: processor speed, and amount of memory used for hash tables.
>>>2. The search algorithm, including extensions.
>>>3. The opening book.
>>>4. Endgame tablebases.
>>>5. The time control.
>>
>>This is like saying, "You cannot evaluate the engine in a car unless you take
>>into consideration the door handles and the headlights".
>>
>>I wanted to discuss the evaluation function of a program on its own - not the
>>other stuff - important though I agree it is.
>>
>
>Your original statement is essentially correct. I did not mean to disagree (in
>fact I agree). My point is that I cannot see a way to measure the quality of an
>evaluation function by itself; but it is clear to me how to measure overall
>program strength.
>
>José.
>
>>Ah well - I have to admit that sometimes it's the door handles that sell the
>>car.
>>
>>Graham.
>>
>>>I think that the correct "accuracy" of the weightings can dramatically change
>>>with these factors.
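
P.S. On José's scaling point above, here is a quick check (the move scores are
made up) that multiplying every score by the same positive constant can never
change which move comes out on top - which is exactly why the raw magnitude of
the scores cannot, by itself, tell us which function is the more accurate one:

#include <stdio.h>

/* Index of the highest-scoring move. */
int best_move(const double *scores, int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (scores[i] > scores[best])
            best = i;
    return best;
}

int main(void)
{
    double eval1[] = { 0.35, -0.20, 1.10, 0.05 };  /* invented move scores */
    double eval2[4];
    double k = 3.7;                                /* any k > 0 will do    */

    for (int i = 0; i < 4; i++)
        eval2[i] = k * eval1[i];                   /* the "scaled" eval    */

    printf("eval1 picks move %d\n", best_move(eval1, 4));
    printf("eval2 picks move %d\n", best_move(eval2, 4));  /* same move */
    return 0;
}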


