Computer Chess Club Archives


Subject: Re: Test your program

Author: Robert Hyatt

Date: 07:51:00 05/05/01

On May 05, 2001 at 10:46:54, Robert Hyatt wrote:

>On May 05, 2001 at 07:56:25, Jesper Antonsson wrote:
>
>>On May 04, 2001 at 21:51:48, Robert Hyatt wrote:
>>>If you do the math:  480 chess processors, 1/2 at 20MHz, 1/2 at 24MHz, you
>>>get an average of 22MHz, which at 10 clocks per node means an average of 2.2M
>>>nodes per second per processor.  Times 480 and you get 1 billion.  Peak of
>>>course, but it _could_ reach that peak.  Hsu claimed his search was about 20%
>>>efficient, which would take that to roughly 200M...
>>
>>Yeah, ok, with "peak" I meant maximum really *attained*, not a "theoretical"
>>max.
>
>
>DB actually hit the theoretical peak frequently.  It just couldn't sustain
>it for a long period of time.  There is a balance point between how fast the
>software can produce positions and how fast the hardware can search them.
>
>Hsu reported that he could drive the chips at roughly 70% of their duty cycle
>averaged over a game, which would be about 700M nodes per second average.
>However, he also factored in the lost efficiency due to search overhead that we
>all have to deal with, which is where the 200M estimate comes from.  But to
>compare to crafty, I simply report NPS for a move as total nodes searched
>divided by seconds used.  For DB the comparable number would average 700M.
>
>
>
>>
>>>Still, it would be _very_ fast.  Just not as fast as deep blue by quite a
>>>ways...  And then there is the evaluation problem.  I _know_ I don't do in my
>>>eval what they did in theirs as I would probably be another factor of 3-5
>>>slower if I did...
>>
>>Another three years. :-) Then it's the tuning of the different eval terms.
>>Perhaps they did better, perhaps you or others are doing/will do better. Hard to
>>say with so little data. I remember a Scientific American article from back in
>>the beginning of the nineties (1991?), though, where the DT team talked about
>>different automatic algorithms for tuning eval against grandmaster games.
>>Perhaps they did really well eventually and surpassed current programs, or did
>>good manual tuning later, but back then, if I remember correctly, they wrote
>>something about their automatic algorithms "narrowing the gap" to the eval of
>>hand-tuned software.
>>
>>>I don't think we will be able to do / use all the 6 man egtbs within 6
>>>years.  The size of all of those will be mind-boggling.  We are approaching
>>>100 gigs and have yet to do anything but 3 vs 3 with no pawns...
>>
>>Perhaps, perhaps not. Hard drive space is still progressing at an impressive
>>rate, though. In six years, I predict terabyte drives will be going into the
>>low-end price range (about $200). Perhaps a couple of such drives won't suffice,
>>or perhaps the tables are still going to take too much time to construct, but we
>>should at least be getting close, no?
>>
>>>If the _hardware_ scales well, then the SMP algorithm will do fine.  My rough
>>>estimate for speedup is N-(N-1)*.3.  Or to make it simpler, roughly .7N where
>>>N is the number of processors...
>>>
>>>64 nodes should scream...
>>>
>>>I will have some data by the Summer I hope...
>>
>>Sweet. Is that 0.7N "raw" NPS, or is it factoring in efficiency losses, IE the
>>extra nodes you search that you wouldn't on a sequential machine?
>>
>>Jesper


I missed that last question.  For Crafty, "raw NPS" is simply N times faster
than one processor.  The .7 accounts for the roughly 30% extra work each
processor does in a parallel search.  IE it would be pretty equivalent to the
numbers Hsu reports when he says 200M.
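
To make that .7N estimate concrete, here is a tiny C sketch (not code from
Crafty, just illustrative) that tabulates the rough speedup formula
N-(N-1)*.3 for a few processor counts; raw NPS scales linearly with N, while
the estimated effective speedup works out to roughly .7N:

/* Illustrative only: the rough SMP speedup estimate quoted above,
   speedup(N) = N - (N-1)*0.3, which is roughly 0.7*N for larger N.
   Raw NPS scales by N; the 0.3 term models the ~30% extra nodes
   each additional processor searches in a parallel search. */
#include <stdio.h>

static double rough_speedup(int n)
{
  return n - (n - 1) * 0.3;
}

int main(void)
{
  int counts[] = { 1, 2, 4, 8, 16, 32, 64 };
  int i;

  for (i = 0; i < (int) (sizeof(counts) / sizeof(counts[0])); i++)
    printf("%2d cpus: raw NPS x%2d, estimated speedup x%4.1f\n",
           counts[i], counts[i], rough_speedup(counts[i]));
  return 0;
}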
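
And, for anyone who wants to check the Deep Blue arithmetic earlier in the
thread, a second sketch that simply recomputes the quoted figures (480 chips,
half at 20MHz and half at 24MHz, 10 clocks per node, ~70% duty cycle, ~20%
search efficiency); the constants are the ones given in this thread, nothing
taken from any Deep Blue documentation:

/* Back-of-the-envelope recomputation of the numbers quoted above. */
#include <stdio.h>

int main(void)
{
  const double chips       = 480.0;   /* chess processors                */
  const double avg_hz      = 22.0e6;  /* half at 20MHz, half at 24MHz    */
  const double clocks_node = 10.0;    /* clocks per node, as quoted      */
  const double duty_cycle  = 0.70;    /* Hsu's reported average duty     */
  const double search_eff  = 0.20;    /* Hsu's quoted search efficiency  */

  double per_chip  = avg_hz / clocks_node;  /* 2.2M nodes/sec            */
  double peak      = per_chip * chips;      /* ~1.06 billion nodes/sec   */
  double sustained = peak * duty_cycle;     /* ~740M, the "700M" figure  */
  double effective = peak * search_eff;     /* ~211M, the "200M" figure  */

  printf("per chip : %6.1fM nps\n", per_chip / 1e6);
  printf("peak     : %6.1fM nps\n", peak / 1e6);
  printf("sustained: %6.1fM nps (70%% duty cycle)\n", sustained / 1e6);
  printf("effective: %6.1fM nps (20%% search efficiency)\n", effective / 1e6);
  return 0;
}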


