Computer Chess Club Archives



Subject: Re: Why use opening books in machine-machine competitions?

Author: Reinhard Scharnagl

Date: 07:24:47 11/25/03


On November 25, 2003 at 09:34:28, Sune Fischer wrote:

Hi Sune,

>On November 25, 2003 at 04:02:45, Reinhard Scharnagl wrote:
>
>>On November 25, 2003 at 03:59:07, Richard Pijl wrote:
>>
>>Hi Richard,
>>
>>[...]

>>> So, you should probably go all the way (forbid all precomputed results in any
>>> form, also the hardcoded variants without external files) or allow everything
>>> (that is legal, e.g. considering copyrights), like it is now.
>>
>>... as I proposed in [http://www.rescon.de/Compu/schachfair_e.html]:
>>
>>The size of a chess engine, including its persistent data, should be limited
>>to approximately 1/4 MB, measured in a strongly compressed form as produced
>>by high-quality packers. There are several reasons for this: it avoids
>>provoking a side competition in, e.g., hiding pre-compressed components, and
>>the choice of programming language then has less effect on the measured size.
>>System DLLs (without any relationship to chess) naturally should not be taken
>>into account.

>I don't like that proposal.

So simply make a better one.

>1/4 MB is completely arbitrary; with some compilers you get close to this just
>with a "hello world" program.

Please explain: even after packing it into a *.RAR file? That is hard to believe!

>I prefer to use C++, which in my experience tends to produce bigger
>executables. Should I really be forced to use C or even assembly just to
>comply with some silly binary-size limitation?

You will additionally notice that C++-compiled executables pack much better
than ones produced directly via assembler. So packing before measuring the
size is a fair method.

>That won't help A.I. one bit I can tell you that.

Being forced to use only strictly limited means always helps to make things
more efficient (and, moreover, comparable).

>Also you should rethink that 'persistent' data idea; an algorithm is also
>persistent data.

You are absolutely right. The distinction between persistent data and
persistent algorithms is purely semantic and completely irrelevant in this case.

>If I write in my code:
>
> if (ImUnderSeriousAttack())
>    score -= huge_danger;
>
>then that is persistent data.

Correct!

>All kinds of knowledge are persistent; whether you get the result from an
>algorithm or a table is just a matter of speed tuning.

That is why it would not make any sense to distinguish between code and data
when measuring.
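
A small illustrative sketch (the names here are invented for this example
only): the same piece of knowledge can be written either as an algorithm or
as a precomputed table, and both end up inside the compressed binary that
would be measured:

  // knowledge expressed as code, computed on the fly
  int pawn_advance_bonus(int rank) { return 4 * rank * rank; }

  // exactly the same knowledge expressed as persistent data
  static const int pawn_advance_table[8] = {0, 4, 16, 36, 64, 100, 144, 196};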

>A.I. research is actually about becoming smarter based on experience, so you
>need to store things; e.g., history tables are a little A.I. in the search.

If such tables are filled dynamically, they count as zero towards the measured
size. So I do not see any problem.
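
For instance (a minimal sketch, not taken from any particular engine): a
classic history-heuristic table is allocated empty and is only filled while
the engine searches, so it contributes nothing to the size of the shipped
binary:

  // history table, zero-initialized at program start, filled only while searching
  static int history[64][64];              // indexed [from-square][to-square]

  // typical update on a beta cutoff: reward the move that caused it
  void reward_cutoff(int from, int to, int depth) {
      history[from][to] += depth * depth;
  }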

>IMO the most interesting (not necessarily the best) solution would be if the
>programs started without a book and slowly generated one by experience.

That is one reason why I often argue for the increased use of FRC (Fischer
Random Chess).

>Even more interesting if they started with no knowledge of the game rules, no
>algorithms, and all weights set to zero, but you have to start *somewhere* :)

I did not understand your last idea, sorry.

Regards, Reinhard.


