Computer Chess Club Archives



Subject: Re: Neural networks in the year 2400

Author: Robert Hyatt

Date: 21:44:50 07/04/03



On July 04, 2003 at 21:56:53, Vincent Diepeveen wrote:

>On July 03, 2003 at 19:41:57, Robert Hyatt wrote:
>
>>On July 02, 2003 at 12:35:38, Vincent Diepeveen wrote:
>>
>>>On July 01, 2003 at 13:32:19, Ralph Stoesser wrote:
>>>
>>>>Hello *,
>>>>
>>>>Why no top engine uses neural networks for positional evaluation in non-tactical
>>>>situations? Are there interesting publications about neural networks and chess
>>>>programming?
>>>>
>>>>Ralph
>>>
>>>because
>>>  a) NNs are too slow
>>
>>20 years ago chess programmers were saying this about _all_ high-level
>>languages, as they wrote in assembly.
>>
>>>  b) they do not work very well in situations they were not trained for,
>>>     and in chess you always explore new positions that were never trained,
>>>     which is easy to understand once you realize that chess has
>>>     10^44 positions and you could train perhaps 10^2 positions at
>>>     most very well, so you are missing around 10^40 somewhere.
>>
>>That's a training issue.  It isn't unsolvable.
>
>I keep hearing in my ears the disappointment of the reporter after asking Jaap
>v/d Herik (apart from being a professor in computer science, he also holds a chair in
>Computer Law) when we would finally see those fantastic stories about computers
>practicing law become reality.
>  "I expect that the first computer practicing law will arrive around the
>year 2110, and by 2150 they will already be working fully automated."
>
>After asking again whether he meant 2010 or 2110, Jaap confirmed that he indeed
>meant a date long after his death.
>
>That's what I keep hearing about (A)NNs too.
>
>Anyone can easily figure out that when non-random training is applied to teach
>a neural network to play chess, the training will require about 10^120
>operations. One operation is pretty complex, however. It involves not only a
>tuning step but also testing a big number of positions.
>
>Therefore picking a date long after Jaap's computers practicing law is a safe
>guess for when ANNs will be capable of solving chess. Can we agree upon the year 2200?
>
>Or do you guess that 10^120 will be reached much sooner, by some brilliant
>innovation in the year 2080? By then I will be turning 107, and since one of my
>grandfathers made it to 101 years old, there is a very remote possibility that
>I will be able to confront your family with this statement by then.
>
>Therefore I advise you to do the same as Jaap and bet on somewhere around 2100.
>
>>
>>>  c) the people who say they work for similar situations are on drugs
>>
>>I can show you an aircraft-tracking ANN that worked just fine.  Very complex
>>problem: tracking multiple targets from one radar image to the next.
>
>I know 1900-rated players on steroids who can beat FIDE masters...
>
>Wait... ...you didn't say that your ANN on steroids beats hand-tuned software.
>I have to give you that. You're improving, Bob!

No idea what you are talking about.  Of _course_ it does its task.  That
was the purpose of doing the research years ago.  It _worked_.  And while
it is also possible to solve the problem with conventional programming, the
conventional solution was _no better_.  And the ANN approach has
nice features that a dedicated program lacks.


>
>If you next time claim the Opteron to be 70 bits because, in addition to being
>a 64-bit processor, it also sets a few flags, then perhaps we would be enjoying
>Kerrigan's cool new wordings for you even longer, as it will then take him more
>time to convince even the utmost idiots here that you are already swallowing
>Alzheimer's drugs.
>
>Most important is that you 'forgot' to mention that airplanes always show
>the same shape on radar, which is exactly the case with voices too, and which is
>exactly what NNs have been proven able to do. When trained well, they can
>recognize something that is 100% similar.

Aircraft don't show a "shape" on radar.  They show a "strength of return" that
varies as the plane(s) bank, change direction, and so forth.  And the image
isn't continuous, but gets redrawn as the antenna rotates.  The problem is
connecting the images between rotations.  It isn't easy.
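For what it's worth, the core of that "connect the images between rotations" problem can be sketched as a nearest-neighbor assignment between successive antenna sweeps. This is only a toy illustration of the association step, not the ANN (or the tracker) from that research; the function name, threshold, and data below are invented:

```python
import math

def associate(tracks, blips, max_dist=5.0):
    """Greedily match existing tracks to new radar blips by distance.

    tracks: dict track_id -> (x, y) last known position
    blips:  list of (x, y) detections from the current antenna sweep
    Returns dict track_id -> (x, y) of matched blips; unmatched tracks
    are dropped (a real tracker would coast them on a motion model).
    """
    matched = {}
    free = list(blips)
    for tid, (tx, ty) in tracks.items():
        if not free:
            break
        # pick the closest unclaimed blip for this track
        best = min(free, key=lambda b: math.hypot(b[0] - tx, b[1] - ty))
        if math.hypot(best[0] - tx, best[1] - ty) <= max_dist:
            matched[tid] = best
            free.remove(best)
    return matched

# two tracks from the previous sweep, three blips in the current one
tracks = {1: (0.0, 0.0), 2: (10.0, 10.0)}
blips = [(0.5, 0.2), (9.4, 10.3), (50.0, 50.0)]
print(associate(tracks, blips))  # {1: (0.5, 0.2), 2: (9.4, 10.3)}
```

Even this greedy toy shows why the problem is hard: with maneuvering targets and returns that fade between sweeps, the "closest blip" is often the wrong one, which is exactly where a learned associator can help.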


>
>However, in chess that happens not to be the case. In chess you have the
>problem that every position that gets searched is different from what the
>network was trained on.



Again, that is crap.  Otherwise _you_ couldn't play the game, because you
_certainly_ cannot search and remember 10^anything positions.  So either
humans can't play chess, or else it is _possible_ for ANNs to learn to do
so.  You can't have it both ways.  Humans don't need to see _every_ possible
position to learn how to play.  Neither does an ANN.  That is the _point_
of ANNs, of course.

You should get off that "if I can't do it, it is impossible and will never
be done" crap.


>
>So that's why this statement on steroids from you doesn't prove anything about
>NNs in chess.

"statement on steroids"???


>
>>>  d) training for chess takes more time than solving chess by brute force
>>>     costs. In fact my approximation is 10^120 operations to train an NN for
>>>     chess, under the condition that the NN has all the relevant knowledge.
>>>     That is quite a big problem when you consider chess is x.10^43
>>>     according to the latest findings.
>>
>>
>>Then humans can't play chess either, because we face the same sort of NN
>>training problem.  The bottom line is that exhaustive training either isn't
>>required, or else humans can't play chess.  One or the other _must_ be
>>true.  And since humans do play the game well...
>
>You are assuming that an NN is built from the same technology as the human
>brain. Even that wouldn't be a problem if they more or less had the same
>functionality. However, that is not the case either, as research already
>pointed out very clearly in the '80s and '90s. The simple model of a brain
>cell from somewhere in the '50s was simply dead wrong. ANNs still suffer from
>that problem. Go ask a brain surgeon for a more recent model of what a brain
>cell does.

Go ask an ANN person what the "more recent models of ANNs" look like.


>
>Additionally, we have a few billion brain cells, and I *wonder* how you plan
>to train an ANN as big as *that*.

I wouldn't need to.  90% of that stuff is unimportant.  I don't need my
ANN to regulate hormones to control normal bodily functions.  I don't
need emotions.  I don't need visual processing.  I don't need speech.  I
don't need 90% of the stuff that the human brain does, in order to play
the game of chess.  That is the point.


>
>Let's say the year 2400 for a 2-billion-neuron multilayer network with loops,
>central collection, and outputs fed back into the inputs?

Who cares?  We don't need one that big to play chess, I'll bet.  I doubt
you use all 2 billion of your neurons to play chess.  On second thought,
you probably use more than most of us, which would explain why you have
trouble understanding how things can work for others but not for you.




>
>Best regards,
>Vincent




Last modified: Thu, 07 Jul 11 08:48:38 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.