Computer Chess Club Archives


Subject: Re: How I Learned to Stop Hating 141

Author: Stuart Cracraft

Date: 08:43:21 09/05/04



On September 05, 2004 at 02:09:34, Uri Blass wrote:

>On September 04, 2004 at 23:54:50, Stuart Cracraft wrote:
>
>>On September 04, 2004 at 18:40:28, Uri Blass wrote:
>>
>>>On September 04, 2004 at 17:35:43, Stuart Cracraft wrote:
>>>
>>>>On September 03, 2004 at 18:14:56, Uri Blass wrote:
>>>>
>>>>>On September 03, 2004 at 17:30:18, Stuart Cracraft wrote:
>>>>>
>>>>>>On September 03, 2004 at 16:52:34, Andrei Fortuna wrote:
>>>>>>
>>>>>>>On September 03, 2004 at 15:41:42, Stuart Cracraft wrote:
>>>>>>>
>>>>>>>>On September 03, 2004 at 05:08:01, Andrei Fortuna wrote:
>>>>>>>>
>>>>>>>>>This makes me think how funny it would be if two engines played a match:
>>>>>>>>>engine A has all kinds of those extensions for checks and so on, while engine
>>>>>>>>>B has implemented a good eval function (with many terms for positional play).
>>>>>>>>>In the match, engine B steers engine A toward the positions where engine A
>>>>>>>>>discovers those mate attacks and so forth ahead of engine B, yet engine A ends
>>>>>>>>>up on the losing side due to B's positional play.
>>>>>>>>
>>>>>>>>I think this kind of self-play, along with auto-tuning and genetic algorithms
>>>>>>>>in general, is underestimated by computer chess programmers. Just because good
>>>>>>>>results haven't been produced yet and there is no easy "elixir" doesn't mean we
>>>>>>>>shouldn't be trying it.
>>>>>>>>
>>>>>>>>Think of the time savings. Heck, your auto-tuner doesn't have to reproduce Bob
>>>>>>>>Hyatt's hand-crafted Crafty evaluation coefficients for terms you'd have to
>>>>>>>>find and prove first -- even if it only matches what you are doing now while
>>>>>>>>saving a lot of time, you have still profited.
>>>>>>>
>>>>>>>
>>>>>>>Hi Stuart,
>>>>>>>
>>>>>>>I wasn't talking about auto-tuning; I was just thinking that if someone who
>>>>>>>invests in the evaluation function plays someone who invests in various
>>>>>>>extensions, the former wins the game. Of course, in reality programmers usually
>>>>>>>take care of both areas ...
>>>>>>>
>>>>>>>Andrei
>>>>>>
>>>>>>Yes -- I understand you weren't -- but there are big savings if you do
>>>>>>it right.
>>>>>>
>>>>>>For me, it is worth investigating as I don't want to spend the rest of
>>>>>>my life tuning evaluation functions.
>>>>>
>>>>>I believe that I can gain more from adding new knowledge than from tuning.
>>>>>
>>>>>Tuning can also be done non-automatically, based on watching problems that
>>>>>repeat in games.
>>>>>
>>>>>Uri
>>>>
>>>>If a problem repeats in games and the program loses, then tuning will
>>>>try various things to prevent it.
>>>>
>>>>Look at Slate's "mouse" program and its learning capability. Highly
>>>>effective yet simple. No need to even adjust coefficients. Just store
>>>>a hash and a move in an avoid file.
>>>>
>>>>Imagine what tuning could do.
>>>>
>>>>I believe both Schaeffer and Marsland have very high expectations for
>>>>the future of tuning via various methods.
>>>>
>>>>Stuart
>>>
>>>The problem is that there are things you simply cannot fix without adding new
>>>knowledge.
>>>
>>>It is not a matter of changing parameters, and I do not see how it can be done
>>>automatically.
>>>
>>>Uri
>>
>>I absolutely concur that nothing, save perhaps a neural-network approach, could
>>discover new associations.
>>
>>But once you have identified a term in a linear or non-linear context, then
>>the weight for it -- THAT is tunable.
>>
>>Certainly parameters cannot easily be added from nowhere automatically.
>>We programmers are needed for that.
>>
>>However, terms can be dropped by auto-tuning: the tuner eventually drives the
>>weights of non-essential terms to zero, or rebalances everything in a way that
>>lets them be zeroed out, thus dropping non-essential knowledge.
>>
>>It is at least as complicated to set up something solid and general.
>>
>>Have you read Baxter et al. and their KnightCap? Please explain that success
>>story.
>>
>>Going from 1600 to 2500, with one blip for an opening book and probably a few
>>more blips from repeated wins over players playing the same moves over and
>>over, is an outstanding success story.
>>
>>Do I have it wrong??? Has anyone repeated their success?????
>>
>>Stuart
>
>I do not think that KnightCap is a success story.
>Movei is clearly better than KnightCap, with no automatic tuning.
>
>Uri

Uri,

1600 to 2500, which is documented on the chess server, is not a success
story?

You have a strange sense of historical perspective!

Why is an increase of 4 1/2 rating classes with (mostly) unattended operation
not significant?

I think they curtailed their effort early and could have gone higher than
2500 with further autotuning.
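
For anyone who hasn't read the papers: KnightCap's tuning was a temporal-difference
scheme (TDLeaf) applied to the weights of a largely linear evaluation. Just to give
the flavor of that kind of update, here is a rough C sketch of a plain TD(lambda)
update on a linear eval -- my own sketch with invented names and sizes, not their
code:

/* Sketch only: a plain TD(lambda) weight update for a linear evaluation
 * eval(s) = sum_i w[i]*phi(s)[i], run once after a (self-play) game.
 * KnightCap's actual scheme (TDLeaf) updates from the leaf of the
 * principal variation and squashes the eval through tanh; both details
 * are omitted here, and every name and size is made up.
 */
#define NTERMS   64        /* number of evaluation terms (assumed)    */
#define MAXPLIES 512       /* max positions recorded per game         */

static double w[NTERMS];                 /* tunable weights           */
static double phi[MAXPLIES][NTERMS];     /* features of each position */
static double ev[MAXPLIES];              /* eval of each position     */

void td_update(int nplies, double alpha, double lambda)
{
    for (int t = 0; t < nplies - 1; t++) {
        /* discounted sum of temporal differences from ply t onward */
        double sum = 0.0, decay = 1.0;
        for (int j = t; j < nplies - 1; j++) {
            sum   += decay * (ev[j + 1] - ev[j]);
            decay *= lambda;
        }
        /* for a linear eval, the gradient w.r.t. w[i] is just phi[t][i] */
        for (int i = 0; i < NTERMS; i++)
            w[i] += alpha * phi[t][i] * sum;
    }
}

Terms whose weights drift toward zero after enough games are exactly the kind of
non-essential knowledge that could then be dropped, as I argued above.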

I disagree with you.

Stuart
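
P.S. On the Slate "mouse" idea quoted above -- storing a hash and a move in an
avoid file -- it really is only a handful of lines. A sketch of the idea in C
(names, file format, and move encoding are all mine, not Slate's actual code):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* One "avoid" record: the position's hash key plus the move to avoid. */
typedef struct {
    uint64_t key;
    char     move[8];      /* e.g. "e2e4" in coordinate notation */
} AvoidEntry;

/* After a loss, append the offending (position, move) pair to the file. */
void avoid_store(const char *file, uint64_t key, const char *move)
{
    FILE *fp = fopen(file, "ab");
    if (!fp) return;
    AvoidEntry e;
    memset(&e, 0, sizeof e);
    e.key = key;
    strncpy(e.move, move, sizeof e.move - 1);
    fwrite(&e, sizeof e, 1, fp);
    fclose(fp);
}

/* At the root, return 1 if this (position, move) pair is on the avoid list. */
int avoid_check(const char *file, uint64_t key, const char *move)
{
    FILE *fp = fopen(file, "rb");
    if (!fp) return 0;
    AvoidEntry e;
    int found = 0;
    while (fread(&e, sizeof e, 1, fp) == 1) {
        if (e.key == key && strcmp(e.move, move) == 0) {
            found = 1;
            break;
        }
    }
    fclose(fp);
    return found;
}

The root move loop would call avoid_check() and skip any matching move; a real
program would of course load the file into memory once instead of re-reading it
for every probe, and append to it whenever a game is lost.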


