Computer Chess Club Archives



Subject: Re: next deep blue

Author: blass uri

Date: 02:40:11 01/22/00


On January 21, 2000 at 22:54:36, Robert Hyatt wrote:

>On January 21, 2000 at 17:22:08, Amir Ban wrote:
>
>>On January 21, 2000 at 15:08:16, Robert Hyatt wrote:
>>
>>>On January 21, 2000 at 13:56:40, Tom Kerrigan wrote:
>>>
>>>>On January 21, 2000 at 11:44:22, Robert Hyatt wrote:
>>>>
>>>>>It would run so much slower it would get killed tactically.  Remember that their
>>>>>king safety included not just pawns around the king, but which pieces are
>>>>>attacking what squares, from long range as well as close range.  Which pieces
>>>>>are attacking squares close to the king, etc.  That takes a good bit of
>>>>>computing to discover.
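
As a rough sketch of the bookkeeping this implies, here is a bitboard-style
king-safety count in C. The attack generators, the zone definition, and the
weight are all assumptions for illustration, not DB's actual terms:

    /* Hypothetical sketch: penalize enemy attacks landing in the zone
     * around our king.  Pawn and king attacks are omitted for brevity;
     * the attack generators are assumed to exist elsewhere in the
     * engine, and the weight of 4 is invented. */
    #include <stdint.h>

    typedef uint64_t Bitboard;
    enum { KNIGHT, BISHOP, ROOK, QUEEN };

    extern Bitboard knight_attacks(int sq);
    extern Bitboard bishop_attacks(int sq, Bitboard occ); /* sliders see occupancy */
    extern Bitboard rook_attacks(int sq, Bitboard occ);
    extern Bitboard king_zone(int king_sq);   /* king square plus its neighbors */

    static int popcount(Bitboard b) { return __builtin_popcountll(b); }

    int king_attack_penalty(int king_sq, Bitboard occ,
                            const int sq[], const int type[], int n)
    {
        Bitboard zone = king_zone(king_sq);
        int hits = 0;
        for (int i = 0; i < n; i++) {
            Bitboard att = 0;
            switch (type[i]) {
            case KNIGHT: att = knight_attacks(sq[i]); break;
            case BISHOP: att = bishop_attacks(sq[i], occ); break;
            case ROOK:   att = rook_attacks(sq[i], occ); break;
            case QUEEN:  att = bishop_attacks(sq[i], occ)
                             | rook_attacks(sq[i], occ); break;
            }
            hits += popcount(att & zone);   /* long and short range alike */
        }
        return -4 * hits;
    }

In software every one of those attack generators is real work at every
evaluated node; DB computed the same information in hardware.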
>>>>
>>>>I realize that it takes a good bit of computing to discover. But I doubt it
>>>>takes so much that it's prohibitive. There are very successful micro programs
>>>>with extremely expensive evaluation functions, e.g., MChess and the King, and to
>>>>a lesser extent, HIARCS and Zarkov. These programs all reportedly have terms
>>>>similar to the ones you describe. I seriously doubt that the DB evaluation
>>>>function is an order of magnitude more complex than, say, MChess's...
>>>>
>>>>-Tom
>>>
>>
>>Add Junior to the above list.
>>
>>
>>>
>>>But they don't take the time to find out which pieces are attacking squares
>>>around the king "through" another piece.  I.e., a bishop at b2 attacking g7, but
>>>only if the Nc3 moves.  Or only if the pawn on d4 or e5 moves.  That gets very
>>>expensive computationally.  DB gets it for nothing.  I think it would slow me
>>>down by a factor of 100 or more, depending on how far I wanted to take it...
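
The standard software trick for that is an x-ray recomputation: lift the
first blocker off the occupancy and see which new squares the slider reaches.
A minimal sketch, reusing the assumed bishop_attacks() generator from the
sketch above:

    /* Squares a bishop attacks "through" exactly one blocker, e.g. a
     * Bb2 hitting g7 through a knight on c3.  `blockers` is the set of
     * pieces we are willing to see through. */
    Bitboard xray_bishop_attacks(int sq, Bitboard occ, Bitboard blockers)
    {
        Bitboard direct = bishop_attacks(sq, occ);
        blockers &= direct;           /* keep only pieces actually hit first */
        return bishop_attacks(sq, occ ^ blockers) & ~direct;
    }

Running that for every slider against every candidate blocker, inside an
evaluation called at every q-search node, is where Hyatt's estimated factor
comes from.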
>>>
>>>That might make me more aware of king attacks, but it would hide many plies
>>>worth of tactics since a factor of 100 is over 4 plies.  Only a wild guess
>>>of course on the factor of 100, but since the eval is done at every node in
>>>the q-search, this is probably within an order of magnitude or two of the
>>>real answer.
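
The ply arithmetic is easy to check: with alpha-beta, each extra ply
multiplies the tree by the effective branching factor b, so a 100x slowdown
costs log_b(100) plies. Assuming b is about 3, a typical figure for programs
of that era, the loss is just over 4 plies:

    /* log(100)/log(3) = 4.19, i.e. "over 4 plies" for a 100x slowdown,
     * assuming an effective branching factor of 3. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double slowdown = 100.0, ebf = 3.0;   /* ebf = 3 is an assumption */
        printf("plies lost: %.2f\n", log(slowdown) / log(ebf));
        return 0;
    }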
>>>
>>>I can guarantee you it is more complex than the above evaluations.  And I don't
>>>even know all the things they evaluate.  One new idea mentioned in Hsu's book
>>>was the concept of "a file that can potentially become open" so that you put
>>>rooks on that file, even though you can't see exactly how you are going to open
>>>it within the 15 plies + extensions they were searching.  "Potentially open"
>>>takes a lot of analysis on the static pawn structure.  I do some of this
>>>pawn structure analysis myself, and even with pawn hashing it slowed me down
>>>significantly when I added it a year+ ago to better handle/detect blocked
>>>positions.
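
Pawn hashing pays off because the pawn structure changes far less often than
the rest of the position: do the expensive file analysis once per unique pawn
configuration and cache the result. A minimal sketch with an invented entry
layout (this is not Crafty's actual table):

    #include <stdint.h>

    typedef uint64_t Bitboard;

    typedef struct {
        uint64_t key;     /* Zobrist key of the pawn configuration */
        int      score;   /* cached pawn-structure score */
    } PawnEntry;

    #define PAWN_TABLE_SIZE (1 << 16)
    static PawnEntry pawn_table[PAWN_TABLE_SIZE];

    /* the slow part: open, half-open, and "potentially open" files,
     * blocked chains, and so on */
    extern int analyze_pawns(Bitboard white_pawns, Bitboard black_pawns);

    int pawn_eval(uint64_t pawn_key, Bitboard wp, Bitboard bp)
    {
        PawnEntry *e = &pawn_table[pawn_key & (PAWN_TABLE_SIZE - 1)];
        if (e->key != pawn_key) {       /* miss: recompute and cache */
            e->key   = pawn_key;
            e->score = analyze_pawns(wp, bp);
        }
        return e->score;
    }

The cache bounds the cost but does not remove it: every new pawn structure
still pays for the full analysis, which fits the report of a significant
slowdown even with hashing.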
>>>
>>>Remember that they claimed about 8,000 static evaluation weights in their
>>>code, as reported by someone who attended a DB talk by Murray Campbell.
>>>8000 sounds like a big number...
>>
>>It's big, but what does it really mean? Some of it must have been piece-square
>>tables for some features that were downloaded from the hosts, and that's
>>hundreds of entries per feature.
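
The inflation is easy to see: one positional feature expressed as a
piece-square table is already 64 weights, and a table per piece type per side
is 6 x 64 x 2 = 768 entries, so ten or so such feature families reach 8,000
without any exotic knowledge. Illustration only:

    /* 768 tunable weights from plain piece-square tables alone; the
     * values would be tuned or, as suggested above, downloaded from
     * the host. */
    enum { WHITE, BLACK };
    enum { PAWN, KNIGHT, BISHOP, ROOK, QUEEN, KING };

    static int pst[2][6][64];

    static inline int pst_score(int color, int piece, int sq)
    {
        return pst[color][piece][sq];
    }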
>>
>>Besides, where is all this sophistication showing up in the DB & DBjr games?
>>Forget the numbers, whatever they mean. Show us the positions & moves.
>>
>>Amir
>
>
>It would seem that the _results_ would speak for themselves.  Who else has
>produced results like theirs?

The question is whether their good results came from a deeper search or from a
better evaluation function.

You cannot answer that from the results alone.

Uri






