Computer Chess Club Archives



Subject: Re: Why use opening books in machine-machine competitions?

Author: Reinhard Scharnagl

Date: 08:28:45 11/25/03

On November 25, 2003 at 11:00:19, Sune Fischer wrote:

>On November 25, 2003 at 10:24:47, Reinhard Scharnagl wrote:

>>>1/4 MB is completely arbitrary; with some compilers you get close to this just
>>>with a "hello world" program.

Hi Sune,

>>Please explain: even after packing it into a *.RAR file? Hard to believe!

>>>I prefer to use C++, which in my experience has a tendency to produce bigger
>>>executables. Should I really be forced to use C or even assembly just to comply
>>>with some silly binary-size limitation?

The limitation is meant to apply AFTER zipping or raring the engine.

>>You will additionally notice that C++-compiled executables pack much better
>>than those produced directly in assembler. So packing before measuring the
>>size is a really fair method.

>I think how good the zipper or compiler may be is irrelevant to the level of
>A.I. in the exe.
>
>More lines of source code will in general produce a better, more sophisticated,
>A.I.-capable program, so I believe the effect will be the opposite of what you
>intend.

A line in a higher-level language produces more machine code than a line of
assembler. Packing reduces redundancy in files, and the higher the level of the
programming language, the more redundancy the executable will contain. Thus a
C++ executable, though larger before packing, shrinks more when packed, which
is exactly what makes the packed size a fair measure.

So it is better to replace your beliefs with test results.
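Such a test is easy to automate. Here is a minimal sketch of a packed-size
check, assuming zlib is available; the limit constant, file handling and names
are only illustrative, not part of any agreed rule:

// Illustrative sketch: measure an engine binary AFTER compression
// against the proposed 1/4 MB limit. Assumes zlib (link with -lz).
#include <zlib.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    if (argc != 2) {
        std::fprintf(stderr, "usage: %s <engine binary>\n", argv[0]);
        return 1;
    }
    // Read the whole binary into memory.
    std::FILE* f = std::fopen(argv[1], "rb");
    if (!f) { std::perror("fopen"); return 1; }
    std::fseek(f, 0, SEEK_END);
    long rawSize = std::ftell(f);
    std::fseek(f, 0, SEEK_SET);
    std::vector<unsigned char> raw(rawSize);
    if (std::fread(raw.data(), 1, rawSize, f) != (size_t)rawSize) {
        std::fclose(f); return 1;
    }
    std::fclose(f);

    // Compress at the highest level and compare against the limit.
    uLongf packedSize = compressBound(raw.size());
    std::vector<unsigned char> packed(packedSize);
    if (compress2(packed.data(), &packedSize, raw.data(), raw.size(),
                  Z_BEST_COMPRESSION) != Z_OK) {
        std::fprintf(stderr, "compression failed\n");
        return 1;
    }

    const uLongf LIMIT = 256 * 1024;  // the proposed 1/4 MB
    std::printf("raw %ld bytes, packed %lu bytes -> %s\n",
                rawSize, (unsigned long)packedSize,
                packedSize <= LIMIT ? "within limit" : "over limit");
    return 0;
}

Compile with e.g. "g++ sizecheck.cpp -lz" and run it on the engine binary.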

>>>That won't help A.I. one bit, I can tell you that.

>>Being forced to use only strictly limited means always helps to make things
>>more efficient (and moreover: comparable).

>Size of binary is not an interesting metric as far as I can see.
>Like I said, intelligence and size are probably inversely related, if anything.

What would be better? I would like to hear!

>>>A.I. research is actually about becoming smarter based on experience, so you
>>>have a need to store things; e.g. history tables are a little A.I. in the search.

Maybe you misunderstand A.I., or I do. For me there is nothing ARTIFICIAL in
gathering HUMAN experience into tables. Instead, I argue for problem
transformation and engine-related procedures.
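For clarity: a history table as Sune means it is filled purely by the engine at
run time and starts at zero. A minimal sketch of the well-known history
heuristic (all names and the indexing scheme are illustrative):

// Minimal sketch of a dynamically filled history table.
#include <cstring>

static int history[64][64];  // indexed by [from-square][to-square]

// On a beta cutoff by a quiet move, reward the move so the move
// ordering tries it earlier in sibling nodes.
void onBetaCutoff(int from, int to, int depth) {
    history[from][to] += depth * depth;
}

// Move ordering: a higher score means the move is searched sooner.
int historyScore(int from, int to) {
    return history[from][to];
}

// Start from zero, e.g. at program start or between games.
void clearHistory() {
    std::memset(history, 0, sizeof(history));
}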

>>If such tables are filled dynamically, they will be measured as zero. So I
>>do not see any problem.

>The program would have to start over from scratch every new game; not very
>ideal, IMO.
>The main idea of A.I. is that it gets better from game to game (no human
>interference in between), thus it needs to be stored in *.learn files of some
>kind.

Why that? It could store its results in persistent data - which of course
would then have to be measured additionally, too.
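Persisting such a table between games is trivial. A sketch, where the file name
and the table layout are only illustrative assumptions:

// Sketch of persistent learn data: the dynamically filled table
// from above, written to a file so it survives between games (and
// would then count toward the measured size).
#include <cstdio>

static int history[64][64];  // the table filled during the games

bool saveLearnData(const char* path) {       // e.g. "engine.learn"
    std::FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    std::fwrite(history, sizeof(history), 1, f);
    std::fclose(f);
    return true;
}

bool loadLearnData(const char* path) {
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return false;                    // no file yet: start from zero
    std::fread(history, sizeof(history), 1, f);
    std::fclose(f);
    return true;
}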

>When a program is released it may come with some data files, and it's impossible
>for us to know if that data has been generated by the program's learning
>algorithm or filled in by hand by a human.

That is obviously true. It is additionally nearly impossible to determine the
proportions of data and code. But this does not really matter: in the long run,
only true A.I. structures will make engines strong, not the impossible copying
of human methods, which we are not able to describe exactly at all.

>>>IMO the most interesting (not necessarily the best) solution would be if the
>>>programs started without book and slowly generated them by experience.

>>That is one reason why I often argue for the increased use of FRC.

>Here we agree :)

I hope for others to also join!
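To sketch the idea of a book generated purely by experience: after each game
the engine updates a score per (position, move) pair and later prefers moves
that have scored well. All structures here are illustrative assumptions; the
result is scored +2/+1/0 for win/draw/loss from the engine's side:

// Illustrative sketch: an opening book built only from the
// engine's own game results, with no human input. Hash type and
// move encoding are assumptions for the example.
#include <cstdint>
#include <map>

struct BookEntry { int games = 0; int score = 0; };

// book[positionHash][move] accumulates experience.
std::map<std::uint64_t, std::map<std::uint16_t, BookEntry>> book;

// After each finished game, feed every (position, move) pair the
// engine played, together with the result (+2 win, +1 draw, 0 loss).
void learnMove(std::uint64_t hash, std::uint16_t move, int result) {
    BookEntry& e = book[hash][move];
    e.games += 1;
    e.score += result;
}

// In later games, play the best-scoring known move, if any.
bool probeBook(std::uint64_t hash, std::uint16_t& bestMove) {
    auto it = book.find(hash);
    if (it == book.end()) return false;
    double best = -1.0;
    for (const auto& [move, e] : it->second) {
        double avg = double(e.score) / e.games;
        if (avg > best) { best = avg; bestMove = move; }
    }
    return true;
}

With FRC there are 960 starting positions, so such a book would really have to
be earned by play rather than copied from human theory.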

>>>Even more interesting if they started with no knowledge of the game rules and no
>>>algorithms and weights set to zero, but you have to start *somewhere* :)

>>I have not understood your last idea, sorry.

>If you take away all pre-existing knowledge there is no program left :)

I do not argue for a zero limit, but for a 1/4 MB limit (for packed data).

Regards, Reinhard.


