Computer Chess Club Archives



Subject: Re: Node frequencies, and a flame

Author: Anthony Cozzie

Date: 17:48:23 10/16/03



On October 16, 2003 at 19:11:20, Dann Corbit wrote:

>On October 16, 2003 at 18:49:55, Anthony Cozzie wrote:
>
>>On October 16, 2003 at 18:07:08, Dann Corbit wrote:
>>
>>>On October 16, 2003 at 15:25:43, Steven Edwards wrote:
>>>
>>>>On October 16, 2003 at 09:20:20, Robert Hyatt wrote:
>>>>>On October 16, 2003 at 09:06:17, swaminathan natarajan wrote:
>>>>
>>>>>>about 900 n/s
>>>>>
>>>>>It had better be faster.  I.e., a single Xeon runs over 1M nodes
>>>>>per second.
>>>>
>>>>How far we have come!
>>>>
>>>>I seem to recall Slate and Atkin reporting that their program Chess 4.5 ranged
>>>>between 250 and 600 Hz on a CDC 6400 (roughly equivalent to an Intel 33 MHz
>>>>80386+80387), and this was enough to give some humans a decent challenge (back
>>>>in the mid 1970s) along with winning the world CC championship.
>>>>
>>>>Processing speed has increased by a factor of forty or so in the past three
>>>>decades.  Are the program/platform combinations of 2003 much more than forty
>>>>times "better" than those of 1973?  How much of the "better" ratio is due to
>>>>improvements in algorithms?
>>>>
>>>>More specifically, if one were to take Crafty or a similar program that has the
>>>>NWU Chess 4.x as a great-granduncle and run it on a 33 MHz 80386+80387 class
>>>>machine, how would it fare against Chess 4.x running on a true clock speed
>>>>emulation of CDC 6400 hardware?  (The last real CDC 6400 was powered off long
>>>>ago, perhaps in the mid 1980s if I remember correctly.)
>>>
>>>I suspect that in a 100 game match, Crafty would win 100 to zero.  We could
>>>reverse the question.  Take the program of long ago and compile it with modern
>>>compilers.  Now try the experiment on really fast hardware.  That is a more
>>>important question to me.  I don't care how Crafty would perform on a 386
>>>because I have no intention of running it on a 386 at any time or for any
>>>reason.
>>>
>>>>I assume that the more modern program would win most of the time, but it
>>>>wouldn't be that much of a performance mismatch.  If today's programs on today's
>>>>hardware are 1000 Elo stronger than the 1973 CC champ, how much of that is due
>>>>to better algorithms vs better hardware?  I'll take a guess and say that thirty
>>>>years of advances in software is responsible for no more than 200 Elo
>>>>improvement and perhaps only 150 Elo points.  And most of the software
>>>>improvement is due to only a few new ideas:
>>>>
>>>>   1. PVS/zero width search
>>>>   2. Null move subtree reduction
>>>>   3. History move ordering heuristics
>>>Insignificant
>>>
>>>>   4. Tablebase access during search
>>>Insignificant
>>>
>>>>   5. Automated tuning of evaluation coefficients
>>>Less than insignificant.  Nobody has ever exceeded the hand tuned values.  Right
>>>now, if you do this, it will make your program play badly.  I also suspect that
>>>the Deep Blue team harmed their chess engine with this approach.
>>>
>>>This one is the most important:
>>>#0. Hash tables and move ordering
>>>
>>>Without this, you won't achieve #1:
>>>#1. Better evaluation
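
As an aside, the null-move reduction in item 2 above really is only a handful
of lines.  A minimal sketch, assuming a plain fail-hard alpha-beta search;
evaluate(), in_check() and the null-move make/unmake calls are placeholders for
whatever a given engine provides, and R = 2 is just a typical choice, not any
particular program's code:

  /* Null-move reduction: give the opponent a free move and search the
   * resulting position to a reduced depth with a zero-width window.
   * If even that fails high, assume the real search would too. */
  int evaluate(void);            /* placeholders supplied by the engine */
  int in_check(void);
  void make_null_move(void);
  void unmake_null_move(void);

  #define R 2                    /* typical reduction amount */

  int search(int depth, int alpha, int beta)
  {
      int score;

      if (depth <= 0)
          return evaluate();     /* or drop into a quiescence search */

      if (depth > R && !in_check()) {
          make_null_move();      /* the side to move simply passes */
          score = -search(depth - 1 - R, -beta, -beta + 1);
          unmake_null_move();
          if (score >= beta)
              return beta;       /* fail-hard cutoff */
      }

      /* ...normal move generation and recursion would follow here... */
      return alpha;
  }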
>>>
>>>>Computer chess was supposed to be the Drosophila of AI.  If so, CC theory is
>>>>still in the larval stage and I don't see wing buds popping out any time soon.
>>>>Where are the CC planning engines?  Where are any general pattern recognition
>>>>algorithms in use?
>>>
>>>Because the hand-tuned algorithms are superior.
>>>
>>>>What program has real machine learning?
>>>
>>>Lots of them.  Unless you mean genetic style evolution of strength or neural
>>>nets.  Both of those have been tried and are flops (as of this date and for
>>>those attempts that have been published).
>>>
>>>>Which programs are
>>>>adaptive and can re-write better versions of themselves?
>>>
>>>Octavius springs to mind.  It's a wimp.
>>>
>>>> How many programs can
>>>>converse in natural language and answer the simplest of questions as to why a
>>>>particular move was made?
>>>
>>>That is 10 years off in the future.
>>>
>>>> Where are the programs that can improve based on
>>>>taking advice vs coding patches to the Evaluate() function?
>>>
>>>There are none of those.  Nimzo's programming approach could be considered
>>>similar to this, except that the language is typed and not spoken.  He uses a
>>>metalanguage that describes chess (IIRC).
>>>
>>>>And the big question: What has CC done for AI in the past thirty years, and what
>>>>can it do for AI in the next thirty years?
>>>
>>>The Deep Blue chess match is the most famous chess match of all time.  The
>>>strongest human player was beaten in a game of exponential complexity.
>>>
>>>It is not a good idea to try to predict the future.  Even mathematically
>>>speaking, when you have a lot of data points, extrapolations are very
>>>dangerous.
>>>
>>>>Hint: Any remotely correct answer does not include the phrase "nodes per
>>>>second".
>>>
>>>I disagree.  Hans Moravec's book shows that in 30 years, our CPUs will be
>>>smarter than we are.  And why is that?  Not due to superior algorithms, but
>>>strictly due to Moore's law.
>>>http://www.frc.ri.cmu.edu/~hpm/talks/revo.slides/2030.html
>>
>>
>>I have some serious problems with that slide.
>>
>>1. Moore's law is NOT A LAW.  It's going to come to an end by 2020, if not
>>earlier.
>
>Not a chance.  It will continue to accelerate.  Of course, I could be wrong.

OK, this is simply wrong.  Moore's law postulates continued exponential growth
in integrated circuit transistor density.
[http://www.intel.com/research/silicon/mooreslaw.htm]  Clearly it is impossible
to make a transistor with fewer than 3 atoms, so the trend cannot continue
forever.  If I remember my quantum theory correctly, an atom is several
angstroms across, so we are getting close to the end of Moore's law.  Already
CMOS is on its last legs.  Maxwell's laws have caught up with it - the only
thing Intel or AMD can do with all the transistors their process guys have been
giving them is build bigger caches.  There is a lot of research going on here,
but CMOS is still not going to take us past 2020 (in terms of continuing to
shrink).
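
To put a rough number on it, here is a back-of-the-envelope sketch.  The 90 nm
starting point for 2003, the roughly half-nanometer atom, and one density
doubling every two years are my assumptions, not anything from Moore or Intel;
the point is just that, even ignoring every practical CMOS problem, the
absolute atomic limit is only around fifteen doublings away:

  /* Back-of-envelope: count density doublings from an assumed 90 nm
   * process (2003) down to atomic scale (~0.5 nm), one doubling of
   * density -- feature size divided by sqrt(2) -- every two years. */
  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
      double feature_nm = 90.0;    /* assumed 2003 feature size       */
      double atom_nm    = 0.5;     /* an atom is a few angstroms wide */
      int year = 2003, doublings = 0;

      while (feature_nm > atom_nm) {
          feature_nm /= sqrt(2.0); /* one density doubling            */
          year += 2;               /* assumed doubling period         */
          doublings++;
      }
      printf("~%d doublings left; atomic scale around %d\n", doublings, year);
      return 0;
  }

That prints roughly 15 doublings, with the hard atomic wall in the early 2030s;
leakage, power and lithography problems bite well before that, which is the
point above.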

>>2. According to his graph, a 486/DX2 is equal in intelligence to a spider.  I
>>think not.  Even the simplest biological organisms have motor control that is
>>better than anything we can do today.
>
>Check out Asimo.
>
>There was also a show I saw where a German autonomous helicopter flew to a scene
>where mock-up accidents occurred.  It correctly identified all of the problems.

Could you post a link?

>>Its pattern recognition is far ahead of
>>the best we can do.
>
>That's because it uses a neural net.  Neural nets are designed for pattern
>recognition.

We don't know what it uses, really :)  Not to mention that no one really
understands neural nets.  If you build a neural net that recognizes a pattern,
you know that it works, but you have no idea *how* or *why* it works.  My point
is that, while computers are very good at certain things, there is much work to
be done in other areas.

>>we have a long way to go in terms of signal processing before we can
>>even do simple things, let alone reason abstractly as a human does.
>
>I think they are already accomplishing this.
>http://www.ifi.ntnu.no/grupper/ai/eval/robot_links.html
>
>>Will computers ever achieve human-like intelligence? I'm certainly not going to
>>state that they won't.
>
>I am quite sure that they will exceed it.  In 1000 years, human intelligence
>will look like a spider compared to the computer.

Well, I'm not even going to touch that one.  I have no idea if humans or
computers will even exist in 1000 years, and realistically neither does anyone
else.  I'm more concerned with my lifetime.

>> Quantum computers in particular are _very_ exciting.
>>But 2020 (as his slide states) is far too soon.
>
>The slide says in HUGE LETTERS 2030.  2020 is at about 'monkey' level on the graph.

Relax, no need to whip out the caps lock ;)  The graph only goes up to 2020, so I
rashly assumed that was his intersection.

>>I think even 2030 is too soon.
>>If computers ever surpass humans, they definitely won't be von Neumann machines.
>
>I think it is unwise to try to predict what kind of machines they will or won't be.

We humans are remarkably bad at predicting the future.  Perhaps machines will do
it better :)

anthony


