Computer Chess Club Archives


Subject: Re: What made Deep Blue good? What will make programs much better now?

Author: Robert Hyatt

Date: 08:59:40 07/09/02



On July 09, 2002 at 03:45:52, Christophe Theron wrote:

>On July 08, 2002 at 23:21:06, Robert Hyatt wrote:
>
>>On July 08, 2002 at 14:11:19, Christophe Theron wrote:
>>
>>>On July 08, 2002 at 13:27:15, Robert Hyatt wrote:
>>>
>>>>On July 08, 2002 at 12:48:58, Sune Fischer wrote:
>>>>
>>>>>On July 08, 2002 at 11:34:36, Robert Hyatt wrote:
>>>>>
>>>>>>>I too am a DB fan.  Just like Bob.
>>>>>>>
>>>>>>>But I actually agree with you here.  I don't think DB did anything
>>>>>>>*spectacular*.
>>>>>>
>>>>>>I totally disagree.  Their speed _was_ "spectacular".  And that was _the_
>>>>>>point of Deep Blue, after all.  Not the point everyone _wants_ to be the
>>>>>>point of Deep Blue, but _the point_ the team developed over 10 years...
>>>>>>
>>>>>
>>>>>Here is a crazy thought: why not simulate DB?
>>>>>Given all the papers, I think it should be possible to modify Crafty to use the
>>>>>same eval and extensions. We turn off hashing, nullmove, SEE and whatever DB
>>>>>didn't have. Then we find a slow machine for Tiger and a super-fast one for
>>>>>Crafty, so Crafty (in DB-mode) has a 200-fold nps advantage.
>>>>>
>>>>>OK, a lot of work, but it seems this is the never-ending story :)
>>>>>
>>>>>-S.
>>>>
>>>>
>>>>This would be great if we had some of the DB guys helping.  Unfortunately,
>>>>while they revealed a lot about various parts of DB, there is no single
>>>>comprehensive source paper to use as a reference.  IE what are those 8,000
>>>>unique eval terms in DB (some of those terms actually represent a matrix with
>>>>multiple values so it is actually more complex than that)?
>>>
>>>
>>>
>>>Sorry but the "8000" includes every entry of every matrix.
>>
>>Not according to the things I have seen written.  But it really doesn't matter
>>to me either way.  I don't have anywhere _near_ 8000 terms in my evaluation.
>>I don't have 1000 unique terms, even counting all the piece/square tables.
>>
>>
>>
>>>
>>>It's like saying that a piece square table program is composed of 768 unique
>>>eval terms (64 squares x 6 piece types x 2 colors).
>>
>>Even if that were done, that is only 10%.  What about the other 90%?  You
>>have a _lot_ of counting to go to reach 8000...
>
>
>They say in their paper that many terms were not used.
>
>
>
>    Christophe
>

Correct.  They also said that 8000 _were_ used.  Hsu has said that maybe
50% of the total evaluation hardware was actually "turned on" in 1997 due to
time constraints.
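
To make the counting question concrete, here is a purely hypothetical piece-square-table
evaluator (not Crafty's or DB's actual code). It stores 2 x 6 x 64 = 768 numbers, yet most
programmers would call it one evaluation "term", not 768:

/* Hypothetical piece-square-table evaluator, shown only to illustrate
   how the entries are counted: 2 colors * 6 piece types * 64 squares
   = 768 stored weights, but arguably a single evaluation feature.    */
#define COLORS  2
#define PIECES  6          /* pawn, knight, bishop, rook, queen, king */
#define SQUARES 64

static int pst[COLORS][PIECES][SQUARES];    /* 768 entries total */

int evaluate(const int board[SQUARES])      /* 0 empty, +1..+6 white, -1..-6 black */
{
    int score = 0;
    for (int sq = 0; sq < SQUARES; sq++) {
        int piece = board[sq];
        if (piece != 0) {
            int color = piece > 0 ? 0 : 1;
            int type  = (piece > 0 ? piece : -piece) - 1;
            int value = pst[color][type][sq];
            score += (color == 0) ? value : -value;
        }
    }
    return score;
}

Counted per stored entry that is 768 "terms"; counted per feature it is one. The gap
between those two conventions is exactly the ambiguity in the 8000 figure.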


>
>
>
>>
>>
>>>
>>>If I count this way, I guess that Chess Tiger must have something like 50000
>>>unique eval terms... :-)
>>>
>>>
>>>
>>>    Christophe
>>>
>>>
>>>
>>>
>>>>  Ditto for some of
>>>>their search algorithms.  They have given lots of 'hints' about things, but
>>>>significant implementation details are not available.
>>>>
>>>>IE something like trying to build an F-1 by watching it run around the track.
>>>>There are _significant_ details that are not readily apparent from such
>>>>observations...
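
As a rough back-of-envelope for the 200-fold nps handicap proposed above: the extra depth
a speed factor buys is about log(speedup) / log(EBF), where EBF is the effective branching
factor. The numbers below are assumptions for illustration, not measurements from DB or
Crafty:

/* Rough estimate of extra plies bought by a 200x speed advantage.
   The EBF values are assumed, not measured.  Link with -lm.       */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double speedup = 200.0;     /* proposed nps handicap */
    for (double ebf = 3.0; ebf <= 5.0; ebf += 1.0)
        printf("EBF %.0f: about %.1f extra plies\n",
               ebf, log(speedup) / log(ebf));
    return 0;
}

That works out to roughly 3 to 5 extra plies of full-width search, which is the kind of
edge such a simulation would be trying to reproduce.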


