Computer Chess Club Archives



Subject: Re: chess and neural networks

Author: Vincent Diepeveen

Date: 03:40:33 07/07/03



On July 06, 2003 at 18:33:33, Ingo Lindam wrote:

>Hello Vincent,
>
>thanks for the doctor title. Although I feel I got this title more by confusing
>you than by convincing or impressing you. May I try to explain at least some
>statements to make sure that I am neither awake for days nor on drugs.
>
>On July 06, 2003 at 17:18:17, Vincent Diepeveen wrote:
>
>>On July 06, 2003 at 08:38:35, Ingo Lindam wrote:
>>
>>>On July 06, 2003 at 08:00:48, Uri Blass wrote:
>>>
>>>>I believe that the biggest advantage that can be achieved in evaluation is not
>>>>in changing the initial static evaluation but in learning to change the
>>>>evaluation during the game based on the results of the search.
>>>>
>>>>I also do not believe that what humans know is the target and the target should
>>>>be better than what humans know.
>>>>
>>>>programs found better evaluation than humans in backgammon and program may find
>>>>better search rules than humans in chess not because programs are smarter but
>>>>because programs may do trillions of calculation to learn and humans cannot do
>>>>it.
>>>>
>>>>Uri
>>>
>>>That is an interesting idea and should really offer a lot of chances.
>>>Nevertheless, I would also fear some risks and would already be happy if the
>>>machine would first learn from finished games and the analysis of finished
>>>games (which also includes a lot of search trees), and modify the evaluation
>>>just on the basis of the current position and the experience gained before the
>>>current game. Learning from the search tree in a completely new position might
>>>of course make sense when there are some reliable evaluation results.
>>
>>It says from: Ingo Lindam
>>
>>I interpret this as you stating that letting it learn from the search is a joke
>>because it can't even trust its own evaluation yet, so it is already too big a
>>challenge to do.... ...but then the rest of the lines is not parsable by me.
>>Yes, what exactly?
>
>As I understood Uri, he suggests (or just dreams of) letting the computer
>change/learn the evaluation as a result of a running search. So assuming the
>computer visits some million positions within that search, for you as well as
>for me the question remains what hard facts the computer can learn from. Such
>learning would therefore be more dangerous than promising.

I guess if it learned from its own search it would only learn to exaggerate
what it already intentionally knew in its hand-made evaluation.

Let's just refer to the actual performance of TD learning (which of course is
not the same as the described performance; that is the usual story needed to
get a doctor title, and the guy who implemented a learner that didn't do 100%
the same as previous learners earned a lot from it) to prove my point here.
In each run, within not too many games, it would slowly exaggerate its king
safety so much that in the end it *always* gave away material for just a few
patzer moves in the direction of the opponent's king.
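The feedback loop described above can be made concrete with a minimal sketch.
The feature names, learning rate, and trajectory below are hypothetical, not
anything from an actual engine: a linear evaluation whose weights are nudged
toward its own next search score has no anchor to ground truth, which is how a
bias like overweighted king safety can feed on itself.

```python
# Minimal TD(0) sketch for a linear chess evaluation (hypothetical features).
# Each position is a feature vector; eval = dot(weights, features).
# After each move the weights are nudged toward the *next* evaluation, so the
# evaluator is trained on its own output -- the self-reinforcing loop above.

ALPHA = 0.01  # learning rate (assumed)

def evaluate(weights, features):
    return sum(w * f for w, f in zip(weights, features))

def td0_update(weights, features_t, features_t1):
    """Move eval(s_t) toward eval(s_t+1): w += alpha * TD-error * feature."""
    error = evaluate(weights, features_t1) - evaluate(weights, features_t)
    return [w + ALPHA * error * f for w, f in zip(weights, features_t)]

# Toy trajectory: [material, king_attack] features of successive positions,
# in which material is sacrificed while attacking features pile up.
weights = [1.0, 0.5]
trajectory = [[0.0, 1.0], [0.0, 2.0], [-1.0, 3.0]]
for s_t, s_t1 in zip(trajectory, trajectory[1:]):
    weights = td0_update(weights, s_t, s_t1)
```

The point of the sketch is only the update rule: no game result or external
signal ever enters it, so whatever the hand-made evaluation already rewards is
what the search scores reflect, and what the weights then chase.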

>Besides that, I just admitted that if there are reliable results in a search
>tree, e.g. some mate scores, you could let the computer learn from that (but
>of course it would probably have no influence on the current evaluation, or
>at least not more than the mate scores themselves).
>
>Nevertheless I am convinced, that it is possible to let computers learn
>evaluation criteria (and weights for or dependencies between several of them)
>from a huge amount of high level games and their results and also from analysis
>made on these games.

The interested intelligent reader is now asking you in which magnitude of
learning experiments you expect such behaviour. Are we talking about 10^20
experiments here, or 10^120?

Just to get the picture clear here.

Because the difference between the intelligent reader and the AI doctors is
putting the actual viewpoint into concrete numbers. The audience prefers
results soon, before the doctor has died :)
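For what it's worth, the idea Ingo states above, learning evaluation weights
from a huge amount of finished games and their results, does have a concrete
and cheap form. The sketch below is an assumption of mine, not anything from
the thread (feature names, the scaling constant K, and the learning rate are
all made up): fit a linear evaluation to game outcomes by gradient descent on
a sigmoid of the score. The cost is on the order of positions times passes,
nowhere near 10^20 experiments.

```python
# Sketch: fit linear evaluation weights to game results (assumed setup).
# Each training sample is (features, result) with result 1.0 for a White win,
# 0.5 for a draw, 0.0 for a loss. A sigmoid maps the evaluation to an
# expected score; gradient descent minimises the squared error.
import math

K = 1.0 / 400.0   # scaling from eval units to winning probability (assumed)
ALPHA = 0.1       # learning rate (assumed)

def expected_score(weights, features):
    ev = sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-K * ev))

def tune(weights, samples, epochs=100):
    for _ in range(epochs):
        for features, result in samples:
            p = expected_score(weights, features)
            # gradient of (p - result)^2 w.r.t. each weight
            grad = 2.0 * (p - result) * p * (1.0 - p) * K
            weights = [w - ALPHA * grad * f for w, f in zip(weights, features)]
    return weights

# Toy corpus: [material, mobility] feature vectors labelled with results.
samples = [([100.0, 5.0], 1.0), ([-100.0, -5.0], 0.0), ([0.0, 0.0], 0.5)]
weights = tune([0.0, 0.0], samples)
```

The weights end up predicting the game results: positions with the winning
side's features score above 0.5 expected points, the losing side's below.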

>
>>You jump from 'result' to 'current position' in an unexplainable way.
>>
>>How is this possible?
>
>I don't see myself jumping from 'result' to 'current position'... but I agree
>that not all my thoughts are formulated in a way that can't be misunderstood...
>
>>Which formula gives you from the result of a game the 'current' position where
>>evaluation has to be modified for?
>
>When I said current position, I always meant the current position in a running
>evaluation process/game. Therefore I can't have a result (because the game is
>still running). For finished games I see a result just as a good/bad experience
>with all positions leading to that result. If a feature, or the combination of
>some features of a position, is not always bad, you should not make only bad
>experiences in a huge number of games where that feature occurred. This is not
>a formula and not an algorithm... and also nothing leads from 'results' to
>'current positions'.
>
>>I'm missing a major step there which no AI dude so far managed to convert into 0
>>and 1, or in short a concrete working algorithm within say 10^10 calculations.
>>
>>>Of course the machine
>>>could also adapt the evaluation on the basis of some features of the search
>>>tree, such as the chances for the opponent to make mistakes, or the number of
>>>features occurring in parts of the search tree that don't fit the abilities
>>>of the opponent, ...
>>
>>This dangerous step i'll skip as we never got here in the first place by normal
>>logics as no one managed to write an actual working
>>ConcludeWhereIsProblemFromGameResult() function :)
>>
>>>I just would claim the machine not to change everything in evaluation just on
>>>basis of the search tree.
>>
>>Now this statement made sense the first time I read it, but I know many will
>>disagree. Most will say that the search tree is a result of the evaluation,
>>and they are right. It's only evaluation that matters in the end. Searching
>>is a pretty simple thing compared to evaluation.
>>
>>The shape of the search tree T using a search method M is only influenced by
>>evaluation E applied to it.
>>
>>Therefore:
>>
>>  M(E) = T
>>
>>>If the machine calculates too long within the search space
>>>it might occur that it throws away everything it ever learned about chess
>>>before, claiming "now I got the real view onto chess and chess strategies, I
>>>stop trusting all the old masters from now on, I don't trust my trainer
>>>anymore, I don't trust my programmer anymore, it's me that has the only right
>>>and ultimate view onto chess, don't stop me now... I am just reinventing
>>>chess..."
>>
>>The first line made sense to me, but then I lose you. It would make more sense
>>if the 'posted by' was Rolf...
>>
>>I would therefore swear you went out the whole evening, came back home at 7 AM
>>or something, and then at 8:38 posted this statement :)
>
>Well, I expect an objection by Rolf.

Well, Rolf is always invited to come here. Not far from here (5 km) there is a
beautiful spot where many Germans are lying down. I figure there is plenty of
space there for Rolf to join his countrymen. He'll have a good overview of the
landscape then, because it is one of the few hills in the whole Netherlands,
right in front of the strategic Rhine river.
It was like that too in the May days of 1940. Four years later, a bit further
upriver, thousands of English and Polish heroes joined those Germans for the
same reason.

I am sure that, despite a lot of Dutch noise preventing the heroes from joining
the Germans, a real chess player would not have done it.

I am 100% convinced that an artificial neural network would have made the same
mistake, as 'noise' isn't going to impress it.

Oh, I forgot to mention: Rolf can forever continue talking about Nazis with his
countrymen lying down there. Because they all were.

>For my part...: This scenario is just an exaggeration (and influenced by my
>kind of humor) of the danger I described above, when a machine changes (all)
>its evaluation just by occurrences and evaluations of the positions in the
>search tree...
>
>>>Such an appearance of chess machine insanity might be a very interesting
>>>experience of computer consciousness (?)... but I would prefer it happening
>>>under control and not in a tournament... Although, it might make computer
>>>chess a record-breaking TV attraction...
>>
>>It is really interesting how, from some crap statements of Uri, you can finish
>>in the last sentence with 'record breaking'.
>>
>>In any way I would award you a doctor title in AI for free, because within a
>>few lines you write down exactly what I would expect someone who wishes to
>>get a doctorate in AI to write down in his thesis ;)
>>
>>>Internette Gruesse,
>>>Ingo
>
>Internette Gruesse,
>Ingo




Last modified: Thu, 07 Jul 11 08:48:38 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.