Computer Chess Club Archives


Subject: Re: Rating Points and Evaluation Function

Author: Robert Hyatt

Date: 13:26:04 05/21/02


On May 20, 2002 at 14:47:24, Eric Baum wrote:

>
>OK then:
>(1) How much have computer programs benefitted from additional
>features? Remove all additional features from the top programs
>except material/piece-square table, and how many rating points would you lose?
>I'm guessing less than 100, but do you have another estimate?

No idea.  For Crafty, all improvements over the last 3+ years have
been _exclusively_ in the evaluation.  I haven't changed the search
at all...
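
To give a rough idea of what that "material plus piece-square table"
baseline looks like, here is a bare sketch.  This is _not_ Crafty's
evaluator; the piece values, the pawn table, and the board encoding are
made-up numbers purely for illustration:

  /* Sketch of a "material + piece-square table" evaluation.
   * Not Crafty's code; values are illustrative placeholders only.
   * board[sq] holds +piece for White, -piece for Black, 0 for empty. */

  enum { EMPTY, PAWN, KNIGHT, BISHOP, ROOK, QUEEN, KING };

  /* Material values in centipawns (conventional, approximate). */
  static const int piece_value[7] = { 0, 100, 300, 300, 500, 900, 0 };

  /* One 64-entry bonus table per piece type; only pawns shown here.
   * Entries reward advanced, central pawns (numbers invented). */
  static const int pawn_pst[64] = {
       0,  0,  0,  0,  0,  0,  0,  0,
       5,  5,  5, -5, -5,  5,  5,  5,
       5,  5, 10, 10, 10, 10,  5,  5,
       5, 10, 15, 20, 20, 15, 10,  5,
      10, 15, 20, 25, 25, 20, 15, 10,
      15, 20, 25, 30, 30, 25, 20, 15,
      25, 30, 35, 40, 40, 35, 30, 25,
       0,  0,  0,  0,  0,  0,  0,  0
  };

  int evaluate(const int board[64])
  {
      int score = 0;
      for (int sq = 0; sq < 64; sq++) {
          int p = board[sq];
          if (p > 0) {                      /* White piece */
              score += piece_value[p];
              if (p == PAWN) score += pawn_pst[sq];
          } else if (p < 0) {               /* Black piece: mirror the rank */
              score -= piece_value[-p];
              if (-p == PAWN) score -= pawn_pst[sq ^ 56];
          }
      }
      return score;                         /* positive = good for White */
  }

Everything beyond that -- passed pawns, king safety, mobility and so on --
is the "additional features" part of the question.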


>
>(2) Are there any programs with significant ability to discover new
>features, or are essentially all the features programmed in by hand?
>If you believe there are programs that discover useful new features,
>how many rating points do you think they have gained?
>And can you give me some idea of what type of algorithm was used?

You are talking about "learning" as humans do it (discovering new features).
I don't know of _any_ program that does this.  Some use pre-defined features,
but twiddle the weights associated with them.  But that is very crude
in comparison to human learning.
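
To make "pre-defined features with tunable weights" concrete, here is a
rough sketch.  The feature names, weights, and stub detector below are
invented for illustration and are not taken from any particular program:

  /* Sketch: an evaluation built from a fixed, hand-chosen feature set.
   * A program can retune weights[] automatically (from game results,
   * self-play, hill-climbing, whatever), but it cannot invent a feature
   * that is not already coded into count_feature(). */

  #define NUM_FEATURES 4

  enum { F_MATERIAL, F_MOBILITY, F_PASSED_PAWN, F_KING_EXPOSURE };

  /* Centipawns per unit of each feature; this is all that "weight
   * tuning" adjusts.  Numbers are placeholders. */
  static int weights[NUM_FEATURES] = { 100, 4, 25, -15 };

  /* Stub detector; a real program has one hand-written routine per
   * feature, and that hand-written part is where the features live. */
  static int count_feature(const int board[64], int feature)
  {
      (void)board;
      (void)feature;
      return 0;      /* placeholder */
  }

  int evaluate_linear(const int board[64])
  {
      int score = 0;
      for (int f = 0; f < NUM_FEATURES; f++)
          score += weights[f] * count_feature(board, f);
      return score;
  }

Tuning the numbers in weights[] is the "twiddling"; discovering a feature a
human never thought to write a detector for is the part no program does.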



>
>Also, for comparison, does anybody have a recent estimate of rating
>point gain per additional ply of search?

50-70 seems to be the current value...  it has been for years, too...
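
(Back-of-the-envelope: at 50-70 points per ply, two extra plies of search
would be worth somewhere around 100-140 points, assuming the gain stays in
that range at the depths involved.)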


>
>(3) Also, am I right in thinking that modern programs are still more or
>less doing alpha-beta with quiescence search, or has there been real
>progress on context-dependent forward pruning, leading to substantial
>rating point gains?

There is some forward pruning going on, from the null-move pruning that
many use, to real forward pruning as Shannon defined it 50 years ago...
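
For anyone who hasn't seen it, here is a bare-bones sketch of how null-move
pruning sits inside an alpha-beta search.  Position and all the helper
routines are placeholders for whatever an engine provides, not any specific
program's interface, and the usual restrictions are only hinted at in the
comments:

  /* Sketch of null-move forward pruning inside a plain alpha-beta search. */

  typedef struct Position Position;

  int  quiesce(Position *pos, int alpha, int beta);   /* assumed helpers */
  int  in_check(const Position *pos);
  void make_null_move(Position *pos);
  void unmake_null_move(Position *pos);

  #define NULL_MOVE_R 2      /* typical depth reduction: 2 or 3 plies */

  int search(Position *pos, int depth, int alpha, int beta)
  {
      if (depth <= 0)
          return quiesce(pos, alpha, beta);

      /* Null-move pruning: let the opponent move twice in a row.  If a
       * reduced-depth search still fails high, assume the real position
       * would too, and cut off without searching any real moves.  Skipped
       * when in check (passing would be illegal) and, in most programs,
       * in pawn endings where zugzwang breaks the idea. */
      if (!in_check(pos) && depth > NULL_MOVE_R) {
          make_null_move(pos);
          int score = -search(pos, depth - 1 - NULL_MOVE_R, -beta, -beta + 1);
          unmake_null_move(pos);
          if (score >= beta)
              return beta;               /* fail high: prune this subtree */
      }

      /* ... normal move loop (generate, make, recurse, unmake) goes here;
       * omitted to keep the sketch short ... */
      return alpha;
  }

"Real" forward pruning in Shannon's sense goes further and discards moves
based on static criteria, before searching them at all.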

>
>
>On May 20, 2002 at 12:54:48, Robert Hyatt wrote:
>
>>On May 20, 2002 at 08:23:35, Eric Baum wrote:
>>
>>>How much do modern programs benefit from
>>>developments beyond alpha-beta search + quiescence
>>>search? So, if you did the same depth search,
>>>same quiescence search, same opening book,
>>>same endgame tables, but replaced the evaluation
>>>function with something primitive-- say material
>>>and not much else-- how many rating points would you
>>>lose?
>>>
>>>My recollection is that one of the Deep Thought theses
>>>showed a minimal gain for Deep Thought from
>>>extensive training of the evaluation function--
>>>it gained some tens of rating points, but
>>>less than it would have gained
>>>from a ply of additional search. Has that changed?
>>
>>
>>You are mixing apples and oranges:
>>
>>apples:  which evaluation features does your program recognize?
>>
>>oranges:  what is the _weight_ you assign for each feature you recognize?
>>
>>Those are two different things.  The Deep Thought paper addressed only the
>>oranges issue.  They had a reasonable set of features, and they set about
>>trying to find the optimal value for each feature to produce the best play.
>>
>>Adding _new_ evaluation features would be a completely different thing,
>>of course...
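
Since the whole exchange takes "alpha-beta plus quiescence search" as the
baseline, here is a bare sketch of the quiescence part for completeness.
Same caveat as above: the types and helper routines are placeholders, not
any real program's interface:

  /* Sketch of a capture-only quiescence search.  Called at depth 0 from
   * the main search; it keeps searching captures until the position is
   * quiet, so the static evaluation is never applied in the middle of an
   * exchange. */

  typedef struct Position Position;
  typedef int Move;                        /* placeholder move encoding */

  int  evaluate(const Position *pos);      /* assumed helpers */
  int  generate_captures(const Position *pos, Move moves[]);
  void make_move(Position *pos, Move m);
  void unmake_move(Position *pos, Move m);

  int quiesce(Position *pos, int alpha, int beta)
  {
      int stand_pat = evaluate(pos);       /* static "stand pat" score */
      if (stand_pat >= beta)
          return beta;
      if (stand_pat > alpha)
          alpha = stand_pat;

      Move moves[256];
      int n = generate_captures(pos, moves);
      for (int i = 0; i < n; i++) {
          make_move(pos, moves[i]);
          int score = -quiesce(pos, -beta, -alpha);
          unmake_move(pos, moves[i]);
          if (score >= beta)
              return beta;
          if (score > alpha)
              alpha = score;
      }
      return alpha;
  }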


