Computer Chess Club Archives



Subject: Re: Dead Wrong!

Author: Dan Homan

Date: 13:03:22 07/22/00



On July 22, 2000 at 13:59:04, Ed Schröder wrote:


>
>A couple of points...
>
>1) A company like IBM doesn't put sloppy information on their pages.
>Certainly not when the goal is promotion. What I read over there I
>take for serious. I know the things that are written on my pages are
>at least double checked.

I have information on my personal webpage about quasars and active
galaxies.  The information is very general and I use terms that the
average person can understand (I think). If another specialist were to
read my statements there, they might misunderstand some of the terms I
use and the way I've used them.  They might think I am saying more
than I really am. Of course, if they understood that my audience was
the general public, they would read the page in a different light and
not be so concerned about my colloquial use of terms.

I think you have to consider who the audience is.  To the average person,
brute force means searching *every* position.  The IBM page clearly defines it
this way.  So when the webpage says they are being more selective than that,
they could mean anything from a highly selective search to nothing more than
the standard alpha-beta algorithm (see the sketch below).  There is no way to
know precisely what is meant, because not enough information is given.  For
this reason, I think statements on a webpage should be considered "bad data".
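
For what it's worth, here is a minimal sketch (mine, not from the IBM page)
of alpha-beta in negamax form over a toy game tree.  It shows why even a
"brute force" engine using alpha-beta does not literally examine every
position: whole branches get cut off once they are provably irrelevant.

def alphabeta(node, alpha, beta):
    # A node is either a leaf score (int) or a list of child nodes.
    if isinstance(node, int):
        return node                     # leaf: static evaluation
    best = -float("inf")
    for child in node:
        # Scores flip sign because the opponent moves next.
        score = -alphabeta(child, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:               # cutoff: the remaining children
            break                       # cannot change the result
    return best

# Toy tree: a cutoff occurs in the last branch, so the leaf scored 2
# is never examined at all.
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree, -float("inf"), float("inf")))   # prints 6

So "not brute force" could mean as little as this standard pruning, or as
much as a heavily selective search - the webpage alone can't tell us which.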

>
>2) DT/DB did very well until 1995 against slow 386/486/6502 processors.
>
>3) Then 1995 came and they lost. Hsu says 2M NPS, IBM says 7M NPS. How come?

They lost a game.  Anyone can lose a game.  Any number of things can cause it.
It is bound to happen.  I am not sure there is any information to be gained here
from a single loss.  Are they worse than before relative to the micros? Maybe.
Do they have the same kind of advantage over the micros as before? Maybe.

Hsu says 2M NPS, which is 2/7 of 7M NPS.  Later, when the program is averaging
700M raw NPS, Hsu claims 200M NPS.  There is clearly a pattern here.  Hsu is
reporting the effective nodes, correcting for the multiprocessor overhead.  In
both cases the overhead is something like 5/7 of the raw nodes, as Bob stated.
I don't see any inconsistency here, except that IBM is a big company - a really
big company.  In big companies (just like governments) information gets reported
inconsistently from time to time.  It happens.
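
Just to make the arithmetic explicit (this is my reading of the numbers
above, assuming "effective NPS" means raw NPS scaled down by the fraction
lost to parallel-search overhead):

# Back-of-the-envelope check of the consistency argument.
raw_1995 = 7_000_000      # IBM's figure for Hong Kong 1995
eff_1995 = 2_000_000      # Hsu's figure for the same machine
raw_1997 = 700_000_000    # average raw speed reported later
eff_1997 = 200_000_000    # Hsu's corrected figure

print(eff_1995 / raw_1995)   # 0.2857... = 2/7
print(eff_1997 / raw_1997)   # 0.2857... = 2/7, same fraction both times

overhead = 1 - eff_1995 / raw_1995
print(overhead)              # 0.714... ~ 5/7 of raw nodes lost to overhead

Both reports scale the raw speed down by the same factor, which is exactly
what you would expect if Hsu is consistently quoting effective nodes.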

>
>4) In 1996 IBM claims 100M NPS
>
>5) In 1997 IBM claims 200M NPS
>
>We all were excited (me included) and we forgot about Hong Kong 1995

Why forget about it?  It is one loss that could mean anything.  Remember it if
you like, but don't make it out to be more than it is.  They could be extremely
dominant and still lose a game due to a bug or a bad restart of the program
after a communications failure or whatever.  Alternatively, they could be
running up against the limits of what speed can do and the micros really were
catching up.  I don't see any way to tell the difference between these two
options.

 - Dan

>because wow... 200M now that is something! It surely will blow all
>computer competition away and it dominated our minds.
>
>Know what I mean?
>
>Ed
>


