Computer Chess Club Archives


Subject: Re: A New Self-Play Experiment -- Diminishing Returns Shown with 95% Conf.

Author: Andrew Dados

Date: 15:13:18 05/25/00



On May 25, 2000 at 16:16:40, Ernst A. Heinz wrote:

>Hi Andrew,
>
>>For a program with unknown source there is always some 'doubt'.
>
>Even for programs with known source, there is "doubt" because
>we do not have a concise analytical model of their behaviour.
>Or do you pretend to understand all the subtle interactions
>going on within a chess program? I certainly do not ...
>
>Much more important than knowledge of the source code, IMO,
>is knowledge about the general design and playing capabilities
>of the program tested. We know both answers for "Fritz 6" --
>it is a sophisticated alpha-beta searcher around a fairly
>standard null-move design and one of the strongest chess
>programs available. "Fritz 6" performs extremely well in both
>comp-comp (e.g. see SSDF) and comp-human games (e.g. see Dutch
>Championship).
>
>>Let's assume its eval produces a very limited set of scores (which is true for
>>root processors, but also for programs with full, but simple eval). Now: the
>>smaller set of scores, the smaller chance of getting new best move based on
>>positional factors. Also some dirty pruning/speeding techniques, like shifting
>>score up at root to blur out small score differences, or assigning a value to
>>null move (shifting alpha) will produce nice speedup, but at the cost of
>>reduced 'positional granularity'.
>>
>>What we can say after Ernsts experiment is: "Some unknown factors can produce
>>diminishing returns". Not much, imo.
>
>Of course, everybody is entitled to his own opinion. But the
>nature of your "scepticism" makes it virtually impossible to
>learn more than "not much" from experiments with _any_ decent
>and moderately complex chess program (see above).
>
>It is well-known that empirical studies can only provide proofs
>of existence. They can only support but never establish a
>general fact.
>
>=Ernst=


I didn't mean to be *that* sceptical :)

I just want to point out that 'diminishing returns' may depend on the granularity
of eval() scores.
Let's return to your 'going deep' experiment and assume two extreme, hypothetical
evals:

M: a material-only eval (it can return only a few distinct values; all positions
map into this small set)
P: an eval which assigns a *unique* value to each position.

All real evals are somewhere in between: one maps positions onto 1,000 values,
another onto 100,000. The first program, after seeing 10M positions, will show
diminishing returns (no new scores are produced), while the second, having a
larger score space, will keep coming up with new 'positional bests' even after
100M nodes.
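
To make the two extremes concrete, here is a rough C sketch (the Position struct,
the piece-value fields, and the hash mixing are purely illustrative assumptions on
my part, not any real program's code):

/* Two hypothetical extremes of eval granularity. */

typedef struct {
    int material_white;        /* summed piece values for White, in centipawns */
    int material_black;        /* summed piece values for Black, in centipawns */
    unsigned long long key;    /* Zobrist-style position hash, assumed available */
} Position;

/* M: material-only eval -- maps all positions onto a small set of scores. */
int eval_M(const Position *pos)
{
    return pos->material_white - pos->material_black;
}

/* P: "maximal granularity" eval -- mixes a few bits of the position key into
   the score so that (almost) every position gets its own distinct value. */
int eval_P(const Position *pos)
{
    return (pos->material_white - pos->material_black) * 256
         + (int)(pos->key & 0xFF);
}

eval_P is of course only a toy model of 'unique value per position', but it makes
the score space roughly 256 times finer than eval_M's.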

It is intuitively obvious to me that a program using the M eval will find fewer
and fewer 'new best' moves from ply to ply, while the P eval, having no
'degenerate' positions, will find many more new best moves (and will show the
diminishing-returns effect, if at all, only at far greater depths).

"Intuitively obvious" is based on simple experiment I tried once to produce even
dumber, 'tactical' version of my program :

Instead of eval(), return eval() | xx, where xx = 3, 7, 15, 31, 63, 127. Note the
tremendous tactical speedup. Fewer re-searches (at the root and in the tree) are
a hint to me that you should see the 'diminishing returns' effect at bigger xx.
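
In code the hack was essentially a one-line wrapper around the existing
evaluation (the names and the eval() signature below are just placeholders for
the sketch):

/* Coarsening hack: OR-ing the score with xx collapses every block of xx+1
   adjacent scores onto a single value (e.g. with xx = 15, the scores 96..111
   all become 111), so the eval's granularity shrinks as xx grows. */

#define XX 15                 /* tried 3, 7, 15, 31, 63, 127 */

extern int eval(void);        /* the engine's normal evaluation, in centipawns */

int coarse_eval(void)
{
    return eval() | XX;
}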
When I did that (on a rather small set of positions) I counted non-zero
re-searches and nodes to a fixed depth. Time permitting, I will redo this, now
focusing on 'new best' counting.
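
What I have in mind for the redo is roughly the following (set_position,
search_root, and the FEN list are placeholder names for the sketch, not my
program's actual interface):

#include <stdio.h>

#define MAX_DEPTH 12

extern void set_position(const char *fen);   /* assumed: load a test position */
extern int  search_root(int depth);          /* assumed: returns the id of the best root move */

/* For each test position, run iterative deepening and count at every depth
   whether the best root move changed relative to the previous iteration. */
void count_new_bests(const char **fens, int n_pos)
{
    int new_best[MAX_DEPTH + 1] = {0};

    for (int p = 0; p < n_pos; p++) {
        set_position(fens[p]);
        int prev = search_root(1);
        for (int d = 2; d <= MAX_DEPTH; d++) {
            int best = search_root(d);
            if (best != prev)
                new_best[d]++;               /* a 'new best' move appeared at depth d */
            prev = best;
        }
    }

    for (int d = 2; d <= MAX_DEPTH; d++)
        printf("depth %2d: %d new-best moves over %d positions\n",
               d, new_best[d], n_pos);
}

Running this once with the normal eval() and once with coarse_eval() should show
directly whether the coarser score space produces fewer 'new best' moves per ply.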

-Andrew-


