Computer Chess Club Archives


Subject: Re: "It's alive, I tell you! It's alive!"

Author: Michael Yee

Date: 11:50:41 05/12/05


On May 12, 2005 at 12:58:00, Matthew Hull wrote:
>On May 12, 2005 at 12:31:05, Steven Edwards wrote:
>>On May 12, 2005 at 12:18:10, Matthew Hull wrote:
>>>On May 12, 2005 at 12:09:11, Steven Edwards wrote:
>>>>On May 12, 2005 at 11:54:16, Matthew Hull wrote:
>>
>>>So, the initial test shows some success.  You also indicate in another post that
>>>a more comprehensive problem set will be the next step, at which point you will
>>>have a mate-attack organism thingy with which to work.  How many other organisms
>>>do you estimate will be needed to get a basic cognitive process to start taking
>>>over from the toolkit?  Do you have a preliminary list of these in mind?
>>
>>I have it working on the 1,001 position suite BWTC.  Later, I'll try automated
>>construction of a multithousand position suite from appropriate PGN data.
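
(Aside, purely to illustrate the kind of automated suite construction described
above and not Steven's actual tooling: something along these lines could skim
candidate positions out of PGN data.  It assumes the python-chess library, and
the file name and the "positions after move 20" rule are invented for the
example.)

import chess.pgn

def positions_from_pgn(path, min_fullmove=20):
    """Yield EPD strings for positions reached in each game of a PGN file."""
    with open(path) as f:
        while True:
            game = chess.pgn.read_game(f)
            if game is None:
                break
            board = game.board()
            for move in game.mainline_moves():
                board.push(move)
                # keep later positions only; the cutoff is arbitrary
                if board.fullmove_number >= min_fullmove:
                    yield board.epd()

if __name__ == "__main__":
    suite = list(positions_from_pgn("games.pgn"))   # hypothetical input file
    print(len(suite), "candidate positions")
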
>>
>>Symbolic has a knowledge sequencer named KsSurveyor whose job is to identify
>>strategic themes in a position.  Each recognized theme is posted as a Theme
>>instance in the Instance Database at that position's search tree node, where
>>the rest of the planning process later uses it.  My idea at the moment is to
>>have (at least) one species/organism for each theme.  This organism will be
>>used to determine theme matching, any moves that will promote the theme
>>(along with a ranking), and (via a one ply search) to help determine
>>countermoves that work against the theme.
>>
>>The list of themes will be stolen from various chess books I have.  Some themes
>>like MateAttack are simple in that they aren't parameterized.  Other themes will
>>have target, region, and sequencing parameters.
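
(Aside: a rough sketch, guessed entirely from the description above, of how a
theme and its per-theme organism might be represented.  The class and field
names are illustrative, not Symbolic's actual Instance Database code.)

from dataclasses import dataclass
from typing import Optional, List

@dataclass
class Theme:
    name: str                              # e.g. "MateAttack"
    target: Optional[str] = None           # square/piece the theme aims at
    region: Optional[str] = None           # board area the theme concerns
    sequencing: Optional[List[str]] = None # ordering constraints, if any

class ThemeOrganism:
    """One organism per theme: matching, promoting moves, countermoves."""

    def __init__(self, theme: Theme):
        self.theme = theme

    def matches(self, position) -> bool:
        """Does the theme apply to this position?"""
        raise NotImplementedError

    def promoting_moves(self, position) -> list:
        """Moves that further the theme, ranked best first."""
        raise NotImplementedError

    def countermoves(self, position) -> list:
        """Replies (found by a one-ply search) that work against the theme."""
        raise NotImplementedError

MateAttack would carry no parameters, so its Theme would just be
Theme("MateAttack"); parameterized themes would fill in target, region, or
sequencing.
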
>
>
>I see this process as similar to NN technology in terms of how processing
>resources are used.  The bulk of the compute resources are consumed ahead of
>time, and the generalized "understanding" is stored for later reference.
>Applying these learned things still takes some effort during a game, but most
>of the work will have already been computed and stored in generalized form.  A
>traditional AB searcher, by contrast, must re-compute its "knowledge"
>constantly and afterward remembers almost nothing, except what may survive in
>the transient hash table.  The exception is the EGTB, whose compute resources
>were already consumed and don't need recalculation.  The EGTB is the only
>thing "remembered" between games, apart from some basic position/book
>learning.  But even the hash table and EGTB are not really "understanding",
>just rote memory.
>
>The conservation of computing effort is what I like most about your approach,
>and about other approaches like NNs.

The observation about storing knowledge/generalization ability is nice. But I
think there are some key differences between neural networks and genetic
algorithms.

NNs are merely a class of functions/models with parameters. (Well, their
structure happens to be able to construct new features from more basic ones,
which is neat.) The parameters can be fit/learned using backpropagation or any
other optimization technique.
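
To make that concrete, here is a toy with nothing to do with chess: a
one-hidden-layer net written with numpy, whose parameters are fit by plain
gradient descent with the backprop worked out by hand.  The architecture,
data, and learning rate are arbitrary choices for the example.

import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))    # toy inputs
y = X ** 2                               # toy target function

W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

lr = 0.1
for step in range(2000):
    h = np.tanh(X @ W1 + b1)             # hidden layer builds new features
    pred = h @ W2 + b2                   # the model is just a function of W, b
    err = pred - y                       # gradient of squared error wrt pred
    # backpropagate through the two layers
    gW2 = h.T @ err / len(X);  gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X);   gb1 = dh.mean(axis=0)
    W2 -= lr * gW2;  b2 -= lr * gb2
    W1 -= lr * gW1;  b1 -= lr * gb1

print("final mse:", float(((pred - y) ** 2).mean()))

The same parameters could just as well be handed to any other optimizer, which
is where a GA comes in.
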

A GA is merely a pretty robust optimization technique that can be used to fit
the parameters of a model (e.g., a NN), or, in general, optimize any objective
function.
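
And a GA really is generic in that sense: the sketch below only ever calls the
objective function, so what it tunes could be NN weights, evaluation terms, or
anything else.  The population size, mutation rate, and toy objective are
arbitrary.

import random

def ga_maximize(objective, length, pop_size=40, generations=200,
                mutation_rate=0.05):
    """Evolve real-valued parameter vectors to maximize objective(vector)."""
    pop = [[random.uniform(-1, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective, reverse=True)
        parents = pop[: pop_size // 2]                  # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]                   # one-point crossover
            child = [g + random.gauss(0, 0.1)           # gaussian mutation
                     if random.random() < mutation_rate else g
                     for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=objective)

# toy objective: every parameter should drift toward 0.7
best = ga_maximize(lambda xs: -sum((x - 0.7) ** 2 for x in xs), length=5)
print(best)
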

In Steven's case (I think), he chose subsets of microfeatures (templates) as his
family of models and used a GA to optimize an objective function (test suite
performance) over possible models. A key difference between his family of models
and NNs is that NNs are hierarchical.
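
In that setup an individual isn't a weight vector but a subset of templates,
something like a bit-vector saying which microfeatures are switched on, and
the fitness is performance on the test suite.  A hedged sketch of the
objective such a GA would optimize (the suite and "engine" below are dummies
so the snippet runs; none of this is Symbolic's real code):

import random

NUM_TEMPLATES = 64
SUITE = [("position %d" % i, "best move %d" % i) for i in range(100)]  # dummy

def engine_solves(position, best_move, template_mask):
    """Stand-in for: search `position` with the masked templates enabled and
    check whether the engine finds `best_move`."""
    return sum(template_mask) > NUM_TEMPLATES // 2      # dummy rule

def suite_score(template_mask):
    """The fitness the GA sees: suite positions solved by this template set."""
    return sum(engine_solves(pos, bm, template_mask) for pos, bm in SUITE)

individual = [random.randint(0, 1) for _ in range(NUM_TEMPLATES)]
print(suite_score(individual), "of", len(SUITE), "positions solved")

A GA like the one sketched earlier would then cross over and mutate these
bit-vectors, keeping whichever template subsets score best on the suite.
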

Michael


