Computer Chess Club Archives



Subject: Re: next deep blue

Author: Robert Hyatt

Date: 17:05:08 01/22/00



On January 22, 2000 at 18:45:28, Amir Ban wrote:

>On January 22, 2000 at 17:47:28, Robert Hyatt wrote:
>
>>On January 22, 2000 at 10:51:12, blass uri wrote:
>>
>>>On January 22, 2000 at 10:30:25, Robert Hyatt wrote:
>>>
>>>>On January 22, 2000 at 05:40:11, blass uri wrote:
>>>>
>>>>>On January 21, 2000 at 22:54:36, Robert Hyatt wrote:
>>>>>
>>>>>>On January 21, 2000 at 17:22:08, Amir Ban wrote:
>>>>>>
>>>>>>>On January 21, 2000 at 15:08:16, Robert Hyatt wrote:
>>>>>>>
>>>>>>>>On January 21, 2000 at 13:56:40, Tom Kerrigan wrote:
>>>>>>>>
>>>>>>>>>On January 21, 2000 at 11:44:22, Robert Hyatt wrote:
>>>>>>>>>
>>>>>>>>>>It would run so much slower it would get killed tactically.  Remember that their
>>>>>>>>>>king safety included not just pawns around the king, but which pieces are
>>>>>>>>>>attacking what squares, from long range as well as close range.  Which pieces
>>>>>>>>>>are attacking squares close to the king, etc.  That takes a good bit of
>>>>>>>>>>computing to discover.
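
A minimal sketch of that kind of king-zone attack counting, on a plain 8x8
mailbox board, might look like the C below.  This is an illustration of the
idea only; every name in it is hypothetical, and DB computed these relations
in hardware rather than in anything like these loops:

    /* Hypothetical sketch: count enemy attacks on the squares around the
       white king.  Simple 8x8 mailbox: board[rank][file], rank 0 = White's
       back rank.  Not Deep Blue's evaluation, just the general idea. */
    #include <stdio.h>

    enum { EMPTY, WP, WN, WB, WR, WQ, WK, BP, BN, BB, BR, BQ, BK };

    static const int knight_d[8][2] = {{1,2},{2,1},{2,-1},{1,-2},
                                       {-1,-2},{-2,-1},{-2,1},{-1,2}};
    static const int king_d[8][2]   = {{1,0},{1,1},{0,1},{-1,1},
                                       {-1,0},{-1,-1},{0,-1},{1,-1}};
    static const int diag_d[4][2]   = {{1,1},{-1,1},{1,-1},{-1,-1}};
    static const int orth_d[4][2]   = {{1,0},{-1,0},{0,1},{0,-1}};

    static int on_board(int f, int r) { return f >= 0 && f < 8 && r >= 0 && r < 8; }

    /* Does any black piece attack square (f, r)? */
    static int black_attacks(int board[8][8], int f, int r)
    {
        int i, tf, tr;

        /* Black pawns capture toward lower ranks. */
        if (on_board(f - 1, r + 1) && board[r + 1][f - 1] == BP) return 1;
        if (on_board(f + 1, r + 1) && board[r + 1][f + 1] == BP) return 1;

        for (i = 0; i < 8; i++) {
            tf = f + knight_d[i][0]; tr = r + knight_d[i][1];
            if (on_board(tf, tr) && board[tr][tf] == BN) return 1;
            tf = f + king_d[i][0]; tr = r + king_d[i][1];
            if (on_board(tf, tr) && board[tr][tf] == BK) return 1;
        }

        /* Sliders: walk each ray until a piece blocks it. */
        for (i = 0; i < 4; i++) {
            for (tf = f + diag_d[i][0], tr = r + diag_d[i][1]; on_board(tf, tr);
                 tf += diag_d[i][0], tr += diag_d[i][1]) {
                if (board[tr][tf] == BB || board[tr][tf] == BQ) return 1;
                if (board[tr][tf] != EMPTY) break;
            }
            for (tf = f + orth_d[i][0], tr = r + orth_d[i][1]; on_board(tf, tr);
                 tf += orth_d[i][0], tr += orth_d[i][1]) {
                if (board[tr][tf] == BR || board[tr][tf] == BQ) return 1;
                if (board[tr][tf] != EMPTY) break;
            }
        }
        return 0;
    }

    /* Count attacked squares next to the white king; a real evaluator
       would weight them by attacker type instead of just summing. */
    static int king_zone_pressure(int board[8][8], int kf, int kr)
    {
        int i, n = 0;
        for (i = 0; i < 8; i++) {
            int tf = kf + king_d[i][0], tr = kr + king_d[i][1];
            if (on_board(tf, tr) && black_attacks(board, tf, tr)) n++;
        }
        return n;
    }

    int main(void)
    {
        int board[8][8] = {{0}};
        board[0][6] = WK;   /* white king on g1 */
        board[7][6] = BQ;   /* black queen on g8, on the open g-file */
        /* prints 1: of {f1, f2, g2, h2, h1}, only g2 is attacked */
        printf("king-zone squares attacked: %d\n",
               king_zone_pressure(board, 6, 0));
        return 0;
    }

Running loops like this for both kings at every q-search node is exactly the
per-node cost being argued about in this thread.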
>>>>>>>>>
>>>>>>>>>I realize that it takes a good bit of computing to discover. But I doubt it
>>>>>>>>>takes so much that it's prohibitive. There are very successful micro programs
>>>>>>>>>with extremely expensive evaluation functions, e.g., MChess and the King, and to
>>>>>>>>>a lesser extent, HIARCS and Zarkov. These programs all reportedly have terms
>>>>>>>>>similar to the ones you describe. I seriously doubt that the DB evaluation
>>>>>>>>>function is an order of magnitude more complex than, say, MChess's...
>>>>>>>>>
>>>>>>>>>-Tom
>>>>>>>>
>>>>>>>
>>>>>>>Add Junior to the above list.
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>>But they don't take the time to find out which pieces are attacking squares
>>>>>>>>around the king "through" another piece.  IE a bishop at b2 attacking g7, but
>>>>>>>>only if the Nc3 moves.  Or only if the pawn on d4 or e5 moves.  That gets very
>>>>>>>>expensive computationally.  DB gets it for nothing.  I think it would slow me
>>>>>>>>down by a factor of 100 or more, depending on how far I wanted to take it...
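
The "attack through one piece" test itself can be sketched in a few lines on
the same kind of 8x8 board (0 = an empty square).  The helper below is
hypothetical; the point of the DB hardware is that it resolves all such rays
in parallel every node, while software has to run a loop like this per ray:

    /* Does a slider on (f0, r0) bear on (f1, r1) directly, or "through"
       exactly one blocker (the bishop-b2-vs-g7 case above)?
       Returns 0 = no, 1 = direct attack, 2 = x-ray through one piece. */
    static int ray_attack_kind(int board[8][8], int f0, int r0, int f1, int r1)
    {
        int df = (f1 > f0) - (f1 < f0);   /* unit step: -1, 0, or +1 */
        int dr = (r1 > r0) - (r1 < r0);
        int f = f0 + df, r = r0 + dr, blockers = 0;

        if (df == 0 && dr == 0) return 0;
        while (f != f1 || r != r1) {
            if (f < 0 || f > 7 || r < 0 || r > 7) return 0;  /* not aligned */
            if (board[r][f] != 0 && ++blockers > 1)
                return 0;                 /* two blockers: no real threat */
            f += df; r += dr;
        }
        return blockers + 1;
    }

For the bishop on b2 (file 1, rank 1) aimed at g7 (file 6, rank 6) with a
knight on c3 in between, this returns 2.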
>>>>>>>>
>>>>>>>>That might make me more aware of king attacks, but it would hide many plies
>>>>>>>>worth of tactics since a factor of 100 is over 4 plies.  Only a wild guess
>>>>>>>>of course on the factor of 100, but since the eval is done at every node in
>>>>>>>>the q-search, this is probably within an order of magnitude or two of the
>>>>>>>>real answer.
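
One way to sanity-check the "over 4 plies" figure: with alpha-beta and decent
move ordering, each extra ply multiplies the tree by an effective branching
factor b, so a 100x slowdown hides about log_b(100) plies of depth.  Taking
b = 3, a plausible (assumed, not quoted) figure for programs of that era:

    \text{plies hidden} \;\approx\; \log_b 100 \;=\; \frac{\ln 100}{\ln 3} \;\approx\; 4.2

For b anywhere between 2.5 and 4 this gives roughly 3.3 to 5 plies,
consistent with the guess above.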
>>>>>>>>
>>>>>>>>I can guarantee you it is more complex than the above evaluations.  And I don't
>>>>>>>>even know all the things they evaluate.  One new idea mentioned in Hsu's book
>>>>>>>>was the concept of "a file that can potentially become open" so that you put
>>>>>>>>rooks on that file, even though you can't see exactly how you are going to open
>>>>>>>>it within the 15 plies + extensions they were searching.  "Potentially open"
>>>>>>>>takes a lot of analysis on the static pawn structure.  I do some of this
>>>>>>>>pawn structure analysis myself, and even with pawn hashing it slowed me down
>>>>>>>>significantly when I added it a year+ ago to better handle/detect blocked
>>>>>>>>positions.
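
A very crude guess at the flavor of a "potentially open" test (an assumed
approximation, not what Hsu's book specifies): call a file potentially open
for White when White's pawns on it have a capture available that would take
them off the file.

    /* Hypothetical file classifier.  wp[f][r] / bp[f][r] are nonzero where
       the side has a pawn (f = file, r = rank, rank 0 = White's back rank). */
    enum { FILE_CLOSED, FILE_HALF_OPEN, FILE_OPEN, FILE_POTENTIAL };

    static int classify_file(int wp[8][8], int bp[8][8], int f)
    {
        int r, w = 0, b = 0, can_vacate = 0;

        for (r = 0; r < 8; r++) {
            if (wp[f][r]) {
                w++;
                /* A white pawn can leave the file by capturing a black
                   pawn one rank up on an adjacent file. */
                if (r + 1 < 8) {
                    if (f > 0 && bp[f - 1][r + 1]) can_vacate = 1;
                    if (f < 7 && bp[f + 1][r + 1]) can_vacate = 1;
                }
            }
            if (bp[f][r]) b++;
        }
        if (w == 0 && b == 0) return FILE_OPEN;
        if (w == 0)           return FILE_HALF_OPEN;   /* for White's rooks */
        if (can_vacate)       return FILE_POTENTIAL;   /* may become open */
        return FILE_CLOSED;
    }

Because the inputs are pawns only, results like this can be computed once per
pawn structure and cached in the pawn hash table mentioned above.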
>>>>>>>>
>>>>>>>>Remember that they claimed about 8,000 static evaluation weights in their
>>>>>>>>code, as reported by someone who went to a DB talk by Murray Campbell.
>>>>>>>>8,000 sounds like a big number...
>>>>>>>
>>>>>>>It's big, but what does it really mean?  Some of it must have been piece-square
>>>>>>>tables for some features that were downloaded from the hosts, and that's
>>>>>>>hundreds of entries per feature.
>>>>>>>
>>>>>>>Besides, where is all this sophistication showing up in the DB & DBjr games?
>>>>>>>Forget the numbers, whatever they mean. Show us the positions & moves.
>>>>>>>
>>>>>>>Amir
>>>>>>
>>>>>>
>>>>>>It would seem that the _results_ would speak for themselves.  Who else has
>>>>>>produced results like theirs?
>>>>>
>>>>>The question is whether their good results were because of a deeper search
>>>>>or because of a better evaluation function.
>>>>>
>>>>>You cannot answer that from the results alone.
>>>>>
>>>>>Uri
>>>>
>>>>
>>>>No, but if you take the 40 games Hsu/Campbell played against micros in their
>>>>lab, with a very slow single-processor version of DB, you might conclude that
>>>>speed wasn't all they had.  IE 38-2 was the result several people here
>>>>reported after attending talks by the two.  That is evidence that they
>>>>are doing something quite good...
>>>
>>>I understood that they had a hardware advantage even in these games, and it is
>>>also possible that they did something good in the search and not in the
>>>evaluation.
>>>
>>>Uri
>>
>>
>>They were searching about 100K nodes per second, according to Hsu.  Yes, they
>>do a lot more in their eval than what they could do if they ran on a PC and
>>searched 100K nodes per second (this match was in the 1995-96 time frame, which
>>would have been roughly Pentium Pro 200 level machines for the PC side).  But
>>regardless... if they searched only 100K, and they dominated the commercial
>>programs as they have been reporting (again, 38-2 was reported), that says that
>>whatever they are doing is pretty good...  as programs like Fritz are way over
>>100K on a P6/200...
>
>Ok, let's see:
>
>If they got that on 100K NPS against micros on, say, P5-133 MHz hardware, they had
>only about a 2-to-1 speed advantage vs. Genius or Rebel, and none vs. Fritz.
>They were probably outsearched because their search had no forward pruning.
>Nevertheless, they won by a huge margin.
>
>The result of 38-2 indicates an advantage of 400-600 points with 95% confidence.
>This means that to achieve only parity in results they would need to cut down
>their speed by a further factor of 1000, to reach about 100 nps, and probably
>only a 2-ply search. You mean their evaluation was so good that they could be
>equal to Rebel & Genius on a 2-ply search?
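
As a quick check of that 400-600 point claim: the standard logistic Elo model
maps a 38-2 score straight to a point estimate, and the 400-600 band then
follows from the binomial uncertainty of only 40 games:

    p = \tfrac{38}{40} = 0.95, \qquad
    \Delta \;=\; -400\,\log_{10}\!\Bigl(\tfrac{1}{p} - 1\Bigr) \;=\; 400\,\log_{10} 19 \;\approx\; 511

That lands at about 511, in the middle of the quoted band.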
>

I didn't say that at all.  I said that at 100K nodes per second they played
several micro programs and finished with a 38-2 result.  That's _all_ I said,
because that is _all_ I know.  I didn't report the 38-2 result... that was
someone else here who went to a Hsu or Campbell talk.

The only conclusion _I_ draw is that at equal NPS, they are much stronger.  I
have _no_ clue about "equal hardware" since that would be impossible to
compute...



>This is fairly unbelievable, but even if we take it at face value, how do you
>explain that in Hong Kong, on hardware 50 times faster, they managed to score
>only half a point in 2 games against the micros (Fritz & WChess)?
>
>Amir

They weren't using DB hardware for one thing.  They were using DT hardware, which
was known to have a pretty primitive evaluation.  This was documented in several
papers by Hsu/Campbell...  His original premise was that a primitive eval with
super-search-speed would be enough to beat Kasparov.  He later decided that
this was not going to happen.  He refined the chip to DB-1 with much more
eval stuff in it.  He then decided that _that_ wasn't enough either, and
designed the DB-2 chips.  The DB-2 chip was the one used in this "micro
match".  It was a _long_ way from the Deep Thought machine that lost to Fritz
in Hong Kong...


