Computer Chess Club Archives


Subject: Re: next deep blue

Author: blass uri

Date: 01:06:58 01/24/00



On January 23, 2000 at 22:56:04, Christophe Theron wrote:

>On January 23, 2000 at 03:35:35, Bruce Moreland wrote:
>
>>On January 23, 2000 at 02:51:55, Amir Ban wrote:
>>
>>>The results can be disregarded on these grounds of course, but it's also true
>>>that the results, as reported, can be dismissed as being in contradiction to the
>>>DB/DT public record, and to common sense in general.
>>
>>Here are some ideas about what might have happened in those games:
>>
>>1) DB Jr may have beaten those programs purely through eval function
>>superiority.
>>
>>2) It may have won because of superior search.
>>
>>3) There may have been a poor comparison between node rates, resulting in DB Jr
>>having a massive hardware advantage.
>>
>>4) The whole thing may be fictitious.
>>
>>5) Random chance.
>>
>>6) Something I haven't thought of yet.
>>
>>Bob may go nuts because I included #4.  I don't believe that #4 is true, but
>>someone can always claim that it is, and there is no obvious evidence that can
>>be used to refute this claim, which disadvantages us who want to understand this
>>rather than argue religion and conspiracies all day.
>>
>>#1 is what we are expected to believe; I thought that is what this test was
>>supposed to measure.  I have a very hard time with this one.  I don't believe
>>there are any terms that in and of themselves would result in such a lopsided
>>match.  I don't believe that I could set up my program to search exactly a
>>hundred million nodes per search, and play it against the best eval function I
>>could possibly write, also searching a hundred million nodes per search, and
>>score 38-2.
>
>
>I totally agree with you here.
>
>
>
>>Could I be convinced that #1 is true?  You bet!  Will I accept that #1 is true
>>based upon faith in the reputations of Hsu and Campbell?  With all due respect,
>>not a chance.  I don't think anyone should be expected to be so trusting in a
>>field that's even remotely scientific.
>>
>>It would also be hard to accept #2, since DB is supposedly not optimized for
>>short searches.  And I believe that you've ruled out #5, which seems a sensible
>>thing to do.
>
>
>I haven't followed the discussion, but on short searches, how many plies deeper
>do you need to compute to get a 38-2 result?
>
>My guess is that 3 plies and a bit of luck would do it easily. You need to be
>100 to 200 times faster than your opponent to achieve this (less if you have a
>decent branching factor, which DB does not).
>
>I think this is a very easy experiment to do.
>
>DB is definitely optimized for short searches, if you think about it.
>
>It has the best NPS of all time, and probably one of the worst branching factors
>you can imagine, because of its crazy singular extensions and the lack of null-move
>(or related) optimizations.
>
>So I would say that, compared to modern microcomputer programs, it would perform
>worse and worse as the time control increases.
>
>
>Maybe I missed something?
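
As a rough sanity check on the "100 to 200 times faster" figure in the quote above:
if the cost of a search grows like ebf**depth, a speedup of S buys about
log(S)/log(ebf) extra plies. Here is a minimal sketch of that arithmetic; the
branching-factor values are assumptions picked for illustration, not measurements
of Deep Blue or any other program.

import math

def extra_plies(speedup, ebf):
    # extra nominal depth bought by a raw speed advantage,
    # assuming cost-to-depth grows like ebf ** depth
    return math.log(speedup) / math.log(ebf)

def speedup_needed(plies, ebf):
    # speed advantage needed to search a given number of extra plies
    return ebf ** plies

print(extra_plies(200, 6))    # ~3.0 extra plies at an assumed ebf of 6 (no null move)
print(speedup_needed(3, 6))   # 216x needed for 3 extra plies at ebf 6
print(speedup_needed(3, 3))   # only 27x needed at ebf 3 (a "decent" branching factor)

With an effective branching factor around 6 you do need roughly 200 times the speed
for 3 extra plies, and much less if your branching factor is better, which is the
point made above.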

I am not sure that it would perform worse, because one ply of Deep Blue is not
the same as one ply of most commercial programs, due to its extensions.


I can give an example from the commercial programs.

Chessmaster 6000 has a big branching factor, but it does not perform worse at long
time controls.

Uri
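
For anyone who wants to see the time-control argument with concrete numbers, here
is a small sketch under the assumption that reaching depth d costs about ebf**d
nodes; the NPS and branching-factor figures are made up purely for illustration and
are not measurements of Deep Blue, Chessmaster 6000, or any other program.

import math

def depth_reached(seconds, nps, ebf):
    # nominal depth reachable in a given time budget, assuming the
    # search to depth d costs roughly ebf ** d nodes
    return math.log(seconds * nps) / math.log(ebf)

for seconds in (1, 10, 60, 600):
    fast_wide   = depth_reached(seconds, nps=200_000_000, ebf=6)  # huge NPS, poor branching factor
    slow_narrow = depth_reached(seconds, nps=100_000, ebf=3)      # modest NPS, good branching factor
    print(f"{seconds:4d}s   fast/wide {fast_wide:4.1f} plies   slow/narrow {slow_narrow:4.1f} plies")

With these made-up numbers the fast-but-wide searcher is slightly ahead at one
second per move and falls behind as the time control grows, which is Christophe's
argument; the caveat above is that a Deep Blue ply is not directly comparable to a
commercial program's ply because of its extensions, so nominal depth alone does not
settle the question.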


