Computer Chess Club Archives



Subject: Re: Junior-Crafty hardware user experiment - 19th and final game

Author: Vasik Rajlich

Date: 00:08:34 12/27/03

On December 26, 2003 at 03:54:24, Uri Blass wrote:

>On December 26, 2003 at 03:24:50, Vasik Rajlich wrote:
>
>>On December 25, 2003 at 20:32:24, Christophe Theron wrote:
>>
>>>On December 25, 2003 at 14:35:53, Uri Blass wrote:
>>>
>>>>On December 25, 2003 at 14:04:47, Christophe Theron wrote:
>>>><snipped>
>>>>>By experience, no smart evaluation can compensate for the loss of one ply of
>>>>>search. In theory evaluation could compensate, but in practice I don't think
>>>>>anybody has ever managed to do it.
>>>>
>>>>I think that evaluation can easily compensate for the loss of one ply of
>>>>search, and not only in theory, because it is easy to tell your evaluation to
>>>>calculate the result of a 2-ply search.
>>>>
>>>>Evaluation by definition is a function that gets a position and returns a
>>>>number. If the target is not to play better but to prove that evaluation can
>>>>compensate for a 1-ply search, then you simply tell your evaluation to
>>>>perform a 2-ply search.
>>>>
>>>>Note that some kind of search is already needed in smart evaluation even if you
>>>>do not make moves.
>>>>
>>>>For example, you cannot detect trapped pieces in evaluation without checking
>>>>that every square they can go to is threatened by the opponent.
>>>>
>>>>You cannot detect forks in your evaluation without doing some kind of search.
>>>>
>>>>Uri
>>>
>>>
>>>
>>>Why did I *know* you would say that? :)
>>>
>>>In this case (positional evaluation doing a search), it's going to be a very
>>>expensive (computationally) evaluation. And what it does is... a search. So it
>>>just proves that nothing beats searching deeper...
>>>
>>>You can quibble on the definition of what positional is, of course. I have
>>>already stated myself several times that simple evaluation terms are able, given
>>>enough depth, to understand more complex concepts. So search is able to extract
>>>some positional information that is not explicitly described in the evaluation
>>>function, yes.
>>>
>>
>>This appears to be true for shallow searches. I've noticed that with very
>>shallow searches (like four or five ply) my program wants to play all sorts of
>>positionally weak moves which it doesn't want to play any more at seven or eight
>>ply. It seems that this also applies to extending deeper searches, as shown,
>>for example, by the SSDF results for different hardware. Clearly, a higher nps
>>is good.
>>
>>I am just wondering where exactly the line lies between maximizing speed and
>>adding knowledge. Consider the following hypothetical engines:
>>
>>engine A - spends 10% of its time in eval
>>engine B - spends 67% of its time in eval
>>
>>Engine A will have a 3x higher nps, so it will do slightly more than 1/2 ply
>>extra.
>
>No
>
>The branching factor of top programs is near 3, so it is practically 1 ply extra.
>
>
>>Engine B will be spending 20x longer evaluating each position. It seems
>>that engine A will gain around 50 rating points from its deeper searches (at
>>least at time controls long enough that engine B can also make it to 8 or 9
>>ply). It's hard to believe that a few accurate eval computations couldn't
>>compensate for this.
>
>The main question is what evaluation A includes.
>Evaluation can be simple but not worthless.
>
>Junior uses almost no time in its evaluation, and it is one of the top programs.
>

True, an evaluation which is both accurate and fast is ideal.

It would be interesting to know the exact profile of the time the top programs
spend in eval. As I remember, Amir said that Junior spends "less than 10% in
most positions". Does this mean a lot of lazy evals and some very big evals? Or
a truly small evaluation which pinpoints the most important features of the
position and ignores the rest?

In many cases, you arrive in eval needing only to show that the non-material
factors can't add up to more than X (i.e. X = material score - alpha, or beta -
material score). As X gets higher and higher, more and more things can be
ignored. Maybe some sort of pinpointed lazy eval is the answer.
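
To make the idea concrete, a pinpointed lazy eval could look something like the
sketch below. This is purely illustrative: evaluateMaterial(),
evaluatePositional() and LAZY_MARGIN are made-up names, and the margin has to
be an upper bound on how much the positional terms can ever move the score.

// Illustrative sketch of a margin-based lazy eval (hypothetical names).
const int LAZY_MARGIN = 200;   // centipawns; assumed bound on positional terms

int evaluate(const Position& pos, int alpha, int beta) {
    int material = evaluateMaterial(pos);   // cheap, usually kept incrementally

    // X = material - alpha (fail-low side) or beta - material (fail-high side).
    // If even LAZY_MARGIN worth of positional factors cannot bring the score
    // back inside the (alpha, beta) window, skip the expensive terms.
    if (material - LAZY_MARGIN >= beta)
        return material;                    // guaranteed fail high
    if (material + LAZY_MARGIN <= alpha)
        return material;                    // guaranteed fail low

    return material + evaluatePositional(pos);   // full (expensive) evaluation
}

The bigger X is, the more often the expensive part gets skipped, which is
exactly the "more and more things can be ignored" effect.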

It seems to me that the areas where an engine can get the most benefit are:

A) Guiding search (i.e. reducing/extending): very very high (a rough sketch of
what this means in code follows below)
B) Quality of eval: very high
C) Overall speed, including speed of eval: medium/low, unless very big factors
are involved, such as building specialized hardware or getting a chance to run
on a big MP machine
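
For concreteness, here is roughly what "A" means inside the move loop of an
alpha-beta search. Again purely illustrative: Position, Move, generateMoves()
and the helper tests are assumed to exist, and the conditions and amounts are
placeholders, not any particular engine's rules.

// Illustrative extend/reduce decision in a negamax move loop (hypothetical
// helpers; mate/stalemate handling omitted for brevity).
int search(Position& pos, int depth, int alpha, int beta) {
    if (depth <= 0)
        return evaluate(pos, alpha, beta);  // a real engine would call qsearch here

    int moveCount = 0;
    for (Move m : generateMoves(pos)) {
        bool quiet = !pos.isCapture(m) && !pos.isPromotion(m);
        pos.makeMove(m);
        ++moveCount;

        int newDepth = depth - 1;
        if (pos.inCheck())                            // extend moves that give check
            newDepth += 1;
        else if (quiet && moveCount > 4 && depth >= 3)
            newDepth -= 1;                            // reduce late, quiet moves

        int score = -search(pos, newDepth, -beta, -alpha);

        // If a reduced move unexpectedly improves alpha, re-search at full depth
        // so the reduction never dismisses a good move on its own.
        if (newDepth < depth - 1 && score > alpha)
            score = -search(pos, depth - 1, -beta, -alpha);

        pos.unmakeMove(m);

        if (score >= beta) return beta;               // fail-hard cutoff
        if (score > alpha) alpha = score;
    }
    return alpha;
}

The appeal of "A" is that a single reduce/extend decision changes the effective
depth of a whole subtree, while the full-depth re-search keeps the reductions
reasonably safe.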

On the other hand, "A" and "B" are quite vague: you never really know if what
you did was an improvement, while "C" is very easy to measure.

Vas

>The question is not whether you can significantly improve a piece-square-table
>evaluation, but whether you can significantly improve a good cheap evaluation,
>and I am sure that the difference between Junior's evaluation and a
>piece-square table is clearly more than one ply.
>
>Uri


