Computer Chess Club Archives


Subject: Re: Hello from Edmonton (and on Temporal Differences)

Author: Sune Fischer

Date: 15:16:30 08/05/02


On August 05, 2002 at 16:44:18, Vincent Diepeveen wrote:

>>It is a non-trivial exercise to do, and I don't know every character of
>>Crafty's code. Besides I would rather spend time on implementing this in my own
>>program and get an edge :)
>
>The hard truth is that there exist only a few 'academic' approaches,
>all of which present completely overoptimistic results. In tuning chess
>programs there has not been a single success in absolute terms.
>
>No, KnightCap didn't play better after tuning. It played worse.

Where does it say that? I think I've read the whole article twice by now;
I must have missed it.
Though I do remember something about 50-100 Elo better than human tuning...
(not sure I would be that optimistic).
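For reference, the update scheme the KnightCap paper describes is TDLeaf(lambda): after each game, the weights are moved so that the evaluation of every position's principal-variation leaf better predicts the evaluations that followed. A minimal sketch, assuming a linear evaluation J(x, w) = w . phi(x) (function and variable names here are mine, not theirs):

```python
# Sketch of a TDLeaf(lambda)-style weight update. Assumes a linear
# evaluation J(x, w) = dot(w, phi(x)), so grad_w J(x, w) = phi(x).
# Names are illustrative, not KnightCap's actual code.

def tdleaf_update(weights, leaf_features, alpha=0.01, lam=0.7):
    """One update over a finished game.

    weights       -- current evaluation weights w
    leaf_features -- phi(x_t) for the PV leaf of each position, t = 1..N
    alpha         -- learning rate
    lam           -- lambda, the temporal-difference discount
    """
    def J(phi):
        return sum(w * f for w, f in zip(weights, phi))

    N = len(leaf_features)
    # temporal differences d_t = J(x_{t+1}, w) - J(x_t, w)
    d = [J(leaf_features[t + 1]) - J(leaf_features[t]) for t in range(N - 1)]

    new_w = list(weights)
    for t in range(N - 1):
        # discounted sum of future temporal differences
        td_sum = sum((lam ** (j - t)) * d[j] for j in range(t, N - 1))
        grad = leaf_features[t]  # gradient of a linear eval is phi(x_t)
        for i in range(len(new_w)):
            new_w[i] += alpha * grad[i] * td_sum
    return new_w
```

With lam=1 every leaf is pulled towards the final outcome; with lam=0 only towards the next position's leaf.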

>Yet the fact that he managed to make A FORM OF TUNING means he
>already did something great, because the majority talk about its
>potential without realizing what they are saying!
>
>I didn't forget it, of course. In fact I've taken whole courses on
>neural networks, where I had to hear the same crap from someone who
>doesn't know a thing about real-world applications.

Oh, like what?

>The hard facts in the game-playing world are simple: we need accurate values.
>You don't design a pattern only to let some 'auto' tuner wreck it
>completely by giving it a -y instead of a +x value. Designing such a
>pattern is then a waste of time!
>
>>>Finding the best values as a human isn't trivial. It sure isn't
>>>for programs. But humans use domain knowledge your tuner doesn't.
>>
>>KnightCap was too interesting a project not to follow up on; I'm very
>>surprised it hasn't been done already.
>>To see people write that it doesn't work when a) KnightCap proved it _did_
>>work, and b) they have not even attempted it themselves, is very funny to me.
>
>Don't get me wrong: I am not saying he wasted his time. On the contrary,
>I feel he did a good job. Nevertheless, let it be clear that he achieved
>nothing concrete. He managed to make something, and it didn't work.

But where did you read that? Is there a second article where everything suddenly
fell apart?

>That's usually a more valuable source of information than someone who
>is just 'guessing' that something doesn't work.
>
>Of course, like all scientists, he wasn't allowed to mention that it didn't
>work, so he started with some kind of random set and then presented the
>learned set.

Oh, so you know the *truth* here? He told it to you but didn't put it in his
article because he wanted to falsify the results? So _why_ did he tell it to
you? He must have been pretty dumb not to keep it a secret then.
Something doesn't add up.

>That means it at least *improved* itself upon random values.
>
>Nevertheless, for me a learner only works if progressively better
>local maxima in the parameter set are found. This is a major
>problem for all learners so far. Not a single learner that is not
>somehow brute-forcing its learning is capable of slowly improving its
>local maximum.
>
>Amazingly, not a single scientist who has done tuning so far has made the
>effort to even *try* to show that, every so many test sets, the local
>maximum improved.

Besides KnightCap, how many have tried it?
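For what it's worth, the check being asked for is easy to instrument: checkpoint the tuner every so many steps and record the benchmark score of the best parameter set found so far. A toy sketch, where benchmark() is a made-up stand-in for running the parameters against a real test suite and the tuner is a plain hill climber, not KnightCap's TD learner:

```python
import random

# Toy sketch: record the best-so-far benchmark score at regular
# checkpoints while tuning. benchmark() is a hypothetical stand-in
# for a real test suite; the tuner is a simple hill climber.

def benchmark(params):
    # made-up smooth objective with its peak at params == [3, -2]
    return -((params[0] - 3) ** 2 + (params[1] + 2) ** 2)

def tune(steps=200, checkpoint_every=50, seed=1):
    rng = random.Random(seed)
    best = [0.0, 0.0]
    best_score = benchmark(best)
    history = []  # best-so-far benchmark score at each checkpoint
    for step in range(1, steps + 1):
        trial = [p + rng.uniform(-0.5, 0.5) for p in best]
        score = benchmark(trial)
        if score > best_score:  # keep only improvements
            best, best_score = trial, score
        if step % checkpoint_every == 0:
            history.append(best_score)
    return best, history

best, history = tune()
```

If a learner works in the sense demanded above, `history` should climb steadily towards the optimum; a flat or oscillating curve would support the objection.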

>That in itself says enough about the quality of the research.
>
>It says even more about the usefulness for real-world applications where
>accuracy is required!

You are contradicting yourself. If the quality of the research is low, then you
can't use the results to prove that it works, nor to prove that it doesn't.

-S.



Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.