Computer Chess Club Archives


Subject: Re: Test your program

Author: Jesper Antonsson

Date: 04:56:25 05/05/01


On May 04, 2001 at 21:51:48, Robert Hyatt wrote:
>If you do the math:  480 chess processors, 1/2 at 20MHz, 1/2 at 24MHz, you
>get an average of 22MHz, which at 10 clocks per node means an average of 2.2M
>nodes per second per processor.  Times 480 and you get 1 billion.  Peak of
>course, but it _could_ reach that peak.  Hsu claimed his search was about 20%
>efficient, which would take that to roughly 200M...

Yeah, ok, with "peak" I meant maximum really *attained*, not a "theoretical"
max.
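As a sanity check, the arithmetic in the quoted figures works out. A minimal sketch, using only the numbers stated above (half the 480 processors at 20 MHz, half at 24 MHz, 10 clocks per node, and Hsu's claimed ~20% search efficiency):

```python
# Back-of-the-envelope check of the quoted Deep Blue throughput figures.
# All inputs come from the post above; nothing here is measured data.
procs = 480
avg_clock_hz = (240 * 20e6 + 240 * 24e6) / procs  # half at 20 MHz, half at 24 MHz -> 22 MHz average
clocks_per_node = 10
nps_per_proc = avg_clock_hz / clocks_per_node     # 2.2M nodes/sec per processor
peak_nps = nps_per_proc * procs                   # ~1.056 billion nodes/sec peak
effective_nps = peak_nps * 0.20                   # ~211M at the claimed 20% efficiency
print(f"peak: {peak_nps:.4g} nps, effective: {effective_nps:.4g} nps")
```

So "1 billion peak" and "roughly 200M" effective are consistent with the stated clock rates.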

>Still, it would be _very_ fast.  Just not as fast as deep blue by quite a
>ways...  And then there is the evaluation problem.  I _know_ I don't do in my
>eval what they did in theirs as I would probably be another factor of 3-5
>slower if I did...

Another three years. :-) Then it's the tuning of the different eval terms.
Perhaps they did better, perhaps you or others are doing/will do better. Hard to
say with so little data. I remember a Scientific American article from back in
the beginning of the nineties (1991?), though, where the DT team talked about
different automatic algorithms for tuning eval against grandmaster games.
Perhaps they did really well eventually and surpassed current programs, or did
good manual tuning later, but back then, if I remember correctly, they wrote
something about their automatic algorithms "narrowing the gap" to the eval of
hand-tuned software.

>I don't think we will be able to do / use all the 6 man egtbs within 6
>years.  The size of all of those will be mind-boggling.  We are approaching
>100 gigs and have yet to do anything but 3 vs 3 with no pawns...

Perhaps, perhaps not. Hard drive capacity is still progressing at an impressive
rate, though. In six years, I predict terabyte drives will be going into the low
end price range (about $200). Perhaps a couple of such drives won't suffice, or
perhaps the tables are still going to take too much time to construct, but we
should at least be getting close, no?
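For what the terabyte prediction assumes: a rough sketch, under the hypothetical (and not guaranteed) assumption that drive capacity keeps doubling about every 18 months, starting from a ~100 GB drive in 2001:

```python
# Hypothetical growth sketch. The 18-month doubling period and the
# 100 GB starting point are assumptions for illustration, not data
# from the post.
start_gb = 100
doubling_months = 18
years = 6
doublings = years * 12 / doubling_months          # 4 doublings in six years
capacity_gb = start_gb * 2 ** doublings           # -> 1600 GB, i.e. ~1.6 TB
print(f"projected capacity after {years} years: {capacity_gb:.0f} GB")
```

Under those assumptions a single drive lands around 1.6 TB in six years, which is roughly where the prediction above sits.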

>If the _hardware_ scales well, then the SMP algorithm will do fine.  My rough
>estimate for speedup is N-(N-1)*.3 for a rough speedup estimate.  Or to make
>it simpler, .7N where N is the number of processors...
>
>64 nodes should scream...
>
>I will have some data by the Summer I hope...

Sweet. Is that 0.7N "raw" NPS, or is it factoring in efficiency losses, i.e. the
extra nodes you search that you wouldn't on a sequential machine?
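For reference, the quoted estimate N - (N-1)*0.3 and its 0.7N simplification track each other closely; a minimal sketch evaluating both for a few processor counts:

```python
# Hyatt's rough SMP speedup estimate from the post: N - (N-1)*0.3,
# which he simplifies to 0.7*N.
def smp_speedup(n: int) -> float:
    return n - (n - 1) * 0.3

for n in (4, 16, 64):
    print(f"N={n:3d}: exact {smp_speedup(n):5.1f}, approx 0.7N {0.7 * n:5.1f}")
```

At N=64 the formula gives a speedup of about 45, so "64 nodes should scream" is consistent with the estimate.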

Jesper



Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.