Computer Chess Club Archives



Subject: Re: Dead Wrong!

Author: Ed Schröder

Date: 10:53:52 07/22/00



On July 22, 2000 at 10:08:15, Robert Hyatt wrote:

>On July 22, 2000 at 05:44:36, Ed Schröder wrote:
>
>>On July 21, 2000 at 22:27:45, Robert Hyatt wrote:
>>
>>>On July 21, 2000 at 19:16:41, Ed Schröder wrote:
>>>
>>>>On July 21, 2000 at 15:29:26, Robert Hyatt wrote:
>>>>
>>>>If you don't mind, I will only answer the points not already discussed
>>>>(enough), to avoid ending up in endless circles.
>>>>
>>>>
>>>>>>2) DB is not a brute force program (as you have always claimed). Quote
>>>>>>from the IBM site:
>>>>>>
>>>>>>    "Instead of attempting to conduct an exhaustive "brute force"
>>>>>>    search into every possible position, Deep Blue selectively
>>>>>>    chooses distinct paths to follow, eliminating irrelevant searches
>>>>>>    in the process."
>>>>>>
>>>>>>I always said this after I had seen the log files. It beats me how you
>>>>>>have always claimed the opposite on such a crucial matter, presenting
>>>>>>yourself as Hsu's spokesman, even saying things on his behalf,
>>>>>>and now being wrong on this crucial matter?
>>>>>
>>>>>Sorry, but you are wrong and are interpreting that wrong.  DB uses _no_
>>>>>forward pruning of any kind, this _direct_ from the DB team.  The above is
>>>>>referring to their search _extensions_ that probe many lines way more deeply
>>>>>than others.  If you want to call extensions a form of selective search, that
>>>>>is ok.  It doesn't meet the definition used in AI literature of course, where
>>>>>it means taking a list of moves and discarding some without searching them at
>>>>>all.
>>>>
>>>>The quoted text describes DB as a selective program, not brute force. I
>>>>don't see how you can explain it otherwise. The text is crystal clear.
>>>>
>>>>
>>>
>>>Why don't you simply ask Hsu, or are you afraid you will get an answer
>>>you don't want?  DB was _always_ brute force.  Every document written about
>>>DB said this.  The paragraph you are quoting is talking about "selective
>>>search extensions" which was one of the real innovations from the Deep Thought
>>>development (singular extensions, later used by Lang, Kittinger, Moreland,
>>>Hyatt, who knows who else).
>>
>>I disagree. Extensions are always selective. Some moves are extended,
>>some are not, and that makes extension a selective process by nature.
>>So the text (about brute force) can't be related to the previous sentence
>>(about extensions). They made two statements (not one).
>>
>
>There is a difference.  It is one thing to search move A one ply deeper than
>move B, based on some (hopefully good) criteria.  It is quite different to
>simply choose to take move B and not search it at all from the current
>position.  One easy example is that in the q-search, I throw out _all_
>non-captures and consider them no further, while captures continue to grow
>trees below them...

That I call pruning :)
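The distinction Hyatt draws above can be sketched in code. This is a hypothetical minimal example, not Deep Blue's actual implementation: an extension searches a forcing move one ply deeper without ever skipping a move, while the quiescence search discards all non-captures outright.

```python
# Hypothetical sketch (not Deep Blue's code) contrasting search
# extensions with quiescence-search pruning as described above.
from dataclasses import dataclass, field
from typing import List

INF = 10**9

@dataclass
class Move:
    is_check: bool
    is_capture: bool
    child: "Position"

@dataclass
class Position:
    name: str
    static_eval: int
    moves: List[Move] = field(default_factory=list)

def qsearch(pos: Position, visited: list) -> int:
    """Quiescence search: non-captures are thrown out entirely."""
    visited.append(pos.name)
    best = pos.static_eval            # "stand pat" score
    for m in pos.moves:
        if not m.is_capture:          # pruning: this move is never searched
            continue
        best = max(best, -qsearch(m.child, visited))
    return best

def search(pos: Position, depth: int, visited: list) -> int:
    """Full-width search: every move is searched, some one ply deeper."""
    if depth <= 0:
        return qsearch(pos, visited)
    visited.append(pos.name)
    if not pos.moves:
        return pos.static_eval
    best = -INF
    for m in pos.moves:
        ext = 1 if m.is_check else 0  # extension: deeper, but never skipped
        best = max(best, -search(m.child, depth - 1 + ext, visited))
    return best

# Tiny demo: in qsearch the quiet reply's subtree is never visited,
# while the capture's is -- pruning, not extension.
visited = []
quiet_child = Position("quiet", 100)
cap_child = Position("cap", 5)
root = Position("root", 0,
                [Move(False, False, quiet_child), Move(False, True, cap_child)])
qsearch(root, visited)
assert "cap" in visited and "quiet" not in visited
```

In the full-width `search`, a checking move reaches one ply further than its siblings, yet no sibling is ever dropped; in `qsearch`, non-captures are cut without being searched at all, which is the pruning Ed is pointing at.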



>>
>>>You _know_ they were basically in the same mold as the rest of us.  This has
>>>_never_ been in doubt.
>>>
>>>If you do doubt it, just ask the horse's mouth, since you don't want to believe
>>>me.
>>>
>>>
>>>
>>>>
>>>>>This _was_ deep thought.  It was doing about 2M nodes per second in 1995,
>>>>>according to Hsu.
>>>>
>>>>Then either Hsu is wrong or the IBM site is.
>>>>
>>>>Quote from the IBM site:
>>>>
>>>>    "Deep Thought acquires 18 additional customized chess processors
>>>>     and emerges as Deep Thought II. It now is running on an IBM/6000
>>>>     and can search six to seven million chess positions per second."
>>>
>>>
>>>That was correct.  But as I said (after a conversation with Hsu) it _never_
>>>really ran at that speed.  The few times they tried to use all the hardware,
>>>things didn't work out very well (this was mainly used during the Fredkin
>>>stage II matches, where they physically shipped the machine (a single Sun
>>>workstation + the VME cards) to remote locations).
>>>
>>>Hsu has said point blank, the most recent version of DT was searching about
>>>2M nodes per second.  I take him at his word, since he built the thing...
>>
>>The only thing that counts here is the contradictory data:
>>
>>1991: IBM 7 million
>>1995: Hsu 2 million
>>
>>Now who to believe, that's the question.
>>
>>
>
>
>Simply ask Hsu.  Wouldn't you think?  I used the 7M speed in a post
>once (either here or in r.g.c.c) and he sent me a private email correcting
>the number.  I believe 7M was the peak number, while 2M was the _effective_
>number and matched the 200M number from DB.  As I said before,  everybody
>used to report MAX or TYPICAL NPS, but the number was RAW.  IE for Cray Blitz,
>my 8 cpu numbers were 8x my one cpu numbers.  Hsu changed the way he reported
>this so that the numbers were more realistic.  IE with CB you might conclude
>that it would search to the same depth in 1/8th the time, since the RAW NPS was
>8X the one CPU number.  That didn't happen.  With Hsu's numbers, if a single
>chip went 2M, he says that 480 CHIP DB won't search the same tree in 1/480th
>of the time.  Rather it will search it in 1/100th the time (2M / 200M, rather
>than 2M / 1B).
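The raw-versus-effective arithmetic in the quoted paragraph can be checked directly. All figures are the ones quoted in the post (2M NPS per chip, 480 chips, 200M effective); the sketch only restates Hyatt's comparison:

```python
# Raw vs. effective NPS, using the figures quoted in the post above.
single_chip_nps = 2_000_000          # one chess chip, per Hsu
chips = 480                          # chips in the full Deep Blue machine

raw_nps = single_chip_nps * chips    # naive sum: ~1 billion ("1B")
effective_nps = 200_000_000          # Hsu's reported effective rate

raw_speedup = raw_nps / single_chip_nps            # 480x, never realized
effective_speedup = effective_nps / single_chip_nps  # ~100x, per Hsu

print(f"raw: {raw_nps:,} NPS ({raw_speedup:.0f}x); "
      f"effective: {effective_nps:,} NPS ({effective_speedup:.0f}x)")
```

This is why the two reporting conventions cannot be mixed: 7M (raw DT) compares to ~1B (raw DB), while 2M (effective DT) compares to 200M (effective DB).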


>If you want to use 7M for DT, then let's use 1B for DB2, as that is comparable.

You made my day.... :)

Good moment to end the discussion.

Ed



>If you want to use 200M for DB, then 2M for DT is the right number.  All right
>from the "horse's mouth" if you know what I mean...
>
>I don't believe he started using the effective number until his PhD
>dissertation, which was well after DT.02 was out and running...
>
>
>
>
>
>
>
>>
>>>>
>>>>6 to 7 million NPS. This was in 1991, four years before the Hong Kong
>>>>event. So according to Hsu and/or IBM, in 1995 the machine dropped from 7 to
>>>>2 million NPS?? One might expect the opposite, a faster machine after
>>>>4 years, not a slower one. Something ain't right with these numbers.
>>>
>>>
>>>Simply email Hsu...  it was his box.  He can tell you what you want to
>>>know...
>>>
>>>
>>>>
>>>>
>>>>>Fine.  Again, Hsu is a liar.  If that is what you want to think.  Here is
>>>>>an excerpt from him that might help:
>>>>>
>>>>>===============================================================================
>>>>>Web-based DB Jr uses a single card, a random opening book (including
>>>>>fairly bad lines) and one second per move (a quarter of which is used
>>>>>in downloading the evaluation function, and the search extensions are
>>>>>more or less off due to the very short time).  It probably plays at around
>>>>>2200, which is usually sufficient to play against players in random marketing
>>>>>events.  Repetition detection is also turned off (The web-based program
>>>>>is stateless).  The playing strength of "DB Jr." spans a quite wide range,
>>>>>depending on the setup.  The top level, which we used for analysis and
>>>>>in-house training against Grandmasters, is likely in the top 10 of the
>>>>>world.
>>>>>================================================================================
>>>>
>>>>I said the contradiction is in the private emails so you can't know.
>>>>
>>>>Ed
>>>
>>>
>>>No, but I believe from the above, which is also private email, there is
>>>absolutely no confusion in what "web DB Jr" was.  It is _very_ clear, and
>>>not open to misinterpretation, wouldn't you say??
>>>
>>>It was thrown together at the request of marketing guys. And "thrown together"
>>>is a pretty accurate description.  He says "2200".  In another email he said
>>>"2200 might have been optimistic"...
>>
>>Every time it is something else. I stopped believing it.
>>
>>Ed
>
>
>That is your choice of course.  I know him a bit better...




Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.