Computer Chess Club Archives



Subject: Re: 3 FACTORS DETERMINE HOW GOOD A CHESS POSITION EVALUATION IS

Author: Graham Laight

Date: 04:51:45 01/12/99


On January 11, 1999 at 22:56:40, Dann Corbit wrote:

>On January 11, 1999 at 20:58:11, Graham Laight wrote:
>
>>On January 11, 1999 at 13:57:31, José de Jesús García Ruvalcaba wrote:
>>
>>>On January 09, 1999 at 05:55:25, Graham Laight wrote:
>>>
>>>>As I was sitting eating my breakfast just now, it occurred to me that there are
>>>>basically 3 items that, between them, will influence how close an evaluation of
>>>>a chess position is to how good that position really is:
>>>>
>>>>1. The number of pieces of knowledge the evaluation function can call upon
>>>>
>>>>2. The quality of those pieces of knowledge
>>>>
>>>>3. The accuracy of selecting the right pieces of knowledge (and their
>>>>appropriate weightings) for the position at hand
>>>>
>>>>
>>>>Does anybody have any thoughts about this?
>>>
>>>I think that different evaluation functions are not comparable by themselves.
>>
>>Why not?
>>
>>You take a chess position, and run 2 different evaluation functions against it.
>>
>>The one that more accurately scores the position is the better evaluation
>>function.
>>
>>>Overall program strength is. I mean, you can compare two evaluation functions
>>>once you have all the other components of the programs fixed; but with a
>>>different set of other components you can get different results.
>>>Among the "other components" I can see:
>>>1. Hardware: processor speed, and amount of memory used for hash tables.
>The same function may perform very differently depending upon CPU, memory
>available, etc.

If the time an evaluation takes is not important (it might be useful to get the
evaluation right first and worry about how to speed it up later), then, apart
from the occasional arithmetic rounding difference (which would be very minor),
the same evaluation function will always produce the same result for a given
position - regardless of memory or CPU (assuming it doesn't crash!).
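
To make that concrete, here is a tiny C++ sketch (the Position fields and the
piece values are made up for the example) of an evaluation that is a pure
function of the position - it gives the same answer on any machine:

#include <cstdio>

// A deliberately tiny position description, just for this example.
struct Position {
    int pawn_diff, knight_diff, bishop_diff, rook_diff, queen_diff;  // white minus black
};

// A pure material evaluation in centipawns: same input, same output,
// on any CPU and with any amount of memory.
int evaluate(const Position& pos) {
    return 100 * pos.pawn_diff
         + 300 * pos.knight_diff
         + 300 * pos.bishop_diff
         + 500 * pos.rook_diff
         + 900 * pos.queen_diff;
}

int main() {
    Position pos{1, 0, -1, 0, 0};           // up a pawn, down a bishop
    std::printf("%d\n", evaluate(pos));     // prints -200 on every machine
}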

>>>2. The search algorithm, including extensions.
>The algorithm chosen will have O(f(n)) relevance.  So a given algorithm may
>perform better at short time controls but lose out at long time controls

This is true. But I was thinking only about making a good evaluation, regardless
of how long it takes.

I agree there's a lot more work to be done to make a great chess-playing
program.

>>>3. The opening book.
>The opening book may be *used* by the evaluation function, especially if it
>contains more data than just the position.  Examples:  What is the frequency of
>win/loss/draw for this position by players of ELO >= x?

Agreed.

In my scenario, this would count as "pieces of knowledge".
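
As a sketch of how such book data might feed the evaluation (the BookStats
layout and the position hashing are assumptions for the example, not any real
book format), a scoring-percentage bonus could simply be one more knowledge
term:

#include <cstdint>
#include <unordered_map>

// Hypothetical book entry: game results in this position among players rated
// at or above some ELO threshold.
struct BookStats { int wins, draws, losses; };

// Hypothetical book keyed by a position hash (how positions are hashed is not
// shown here).
using OpeningBook = std::unordered_map<std::uint64_t, BookStats>;

// One "piece of knowledge": a centipawn bonus derived from the book's scoring
// percentage for the side to move. Returns 0 when the position is unknown.
int book_knowledge_term(const OpeningBook& book, std::uint64_t pos_hash) {
    auto it = book.find(pos_hash);
    if (it == book.end()) return 0;
    const BookStats& s = it->second;
    int games = s.wins + s.draws + s.losses;
    if (games == 0) return 0;
    double score = (s.wins + 0.5 * s.draws) / games;      // 0.0 .. 1.0
    return static_cast<int>((score - 0.5) * 100.0);       // roughly -50 .. +50
}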

>>>4. Endgame tablebases.
>These can *definitely* be an integrated part of the evaluation function.  If
>they are not, then they probably should be.

Agreed.

In my scenario, this would count as "pieces of knowledge".
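
Here is a rough C++ sketch of what "integrated" might look like (the probe
function and the piece-count rule are placeholders, not any real tablebase
interface): when the database can answer, its exact value simply overrides the
heuristic terms.

#include <optional>

// Minimal stand-ins so the sketch compiles on its own; a real program would
// use its actual position type and endgame-database probe here.
struct Position { int piece_count; };
enum class TBValue { Loss, Draw, Win };

// Hypothetical probe: pretend the database only covers positions with five or
// fewer pieces, and (for the placeholder) always calls them drawn.
std::optional<TBValue> probe_tablebase(const Position& pos) {
    if (pos.piece_count <= 5) return TBValue::Draw;
    return std::nullopt;
}

// Placeholder for the ordinary heuristic evaluation.
int heuristic_eval(const Position&) { return 0; }

// When the tablebase answers, its exact value overrides the heuristic terms.
int evaluate_with_tablebase(const Position& pos) {
    if (auto tb = probe_tablebase(pos)) {
        switch (*tb) {
            case TBValue::Win:  return  30000;   // treated like a forced win
            case TBValue::Loss: return -30000;
            case TBValue::Draw: return      0;
        }
    }
    return heuristic_eval(pos);
}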

>>>5. The time control.
>This comes into relevance if you are talking about a particular algorithm.  See
>remarks on (2) above.

Agreed. Again, I believe that the evaluation knowledge-management principles
should be sorted out first, and the performance issues second.
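
To tie the three items at the top of the thread together, here is one possible
C++ sketch of what "evaluation knowledge management" could mean in code (all
of the names are invented for illustration): each piece of knowledge carries a
score and a position-dependent weight, and the evaluation is their weighted
sum.

#include <functional>
#include <vector>

// The board representation is omitted; only the shape of the idea matters here.
struct Position { /* ... */ };

// One "piece of knowledge": a feature score plus a weighting rule that may
// itself depend on the position (factor 3 above).
struct KnowledgePiece {
    std::function<int(const Position&)>    score;   // how good or bad the feature is
    std::function<double(const Position&)> weight;  // how much it matters right here
};

// Factor 1 is the size of this collection, factor 2 is the quality of each
// score function, and factor 3 is how well the weights fit the position.
int evaluate_position(const Position& pos,
                      const std::vector<KnowledgePiece>& knowledge) {
    double total = 0.0;
    for (const auto& k : knowledge)
        total += k.weight(pos) * k.score(pos);
    return static_cast<int>(total);
}

A king safety term, for example, might be given far less weight once the
queens are off the board.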

>>This is like saying, "You cannot evaluate the engine in a car unless you take
>>into consideration the door handles and the headlights".
>>
>>I wanted to discuss the evaluation function of a program on its own - not the
>>other stuff - important though I agree it is.
>I think that all of the above can (indeed) be an integral part of the evaluation
>function.
>
>>Ah well - I have to admit that sometimes it's the door handles that sell the
>>car.
>>
>>Graham.
>>
>>>	I think that the correct "accuracy" of the weightings can dramatically change
>>>with these factors.


