Computer Chess Club Archives


Subject: Re: How to turn an ordinary micro chess program into Deep Blue?

Author: Robert Hyatt

Date: 17:41:57 04/13/00

On April 13, 2000 at 17:56:22, Pete Galati wrote:

>On April 13, 2000 at 17:32:26, Robert Hyatt wrote:
>
>>On April 13, 2000 at 11:26:19, Pete Galati wrote:
>>
>>>On April 13, 2000 at 09:12:43, Robert Hyatt wrote:
>>>
>>>>On April 13, 2000 at 03:02:09, blass uri wrote:
>>>>
>>>>>On April 12, 2000 at 23:29:54, Robert Hyatt wrote:
>>>>>
>>>>>>On April 12, 2000 at 17:46:51, Dann Corbit wrote:
>>>>>>
>>>>>>>On April 12, 2000 at 17:32:44, Derrick Williams wrote:
>>>>>>>
>>>>>>>>On April 12, 2000 at 16:48:15, Dann Corbit wrote:
>>>>>>>>
>>>>>>>>>On April 12, 2000 at 16:36:09, Derrick Williams wrote:
>>>>>>>>>
>>>>>>>>>>I would like to simulate the experience of playing against Deep Blue.  How long
>>>>>>>>>>would I have to let Fritz 6 think per move on a Pentium 450 to simulate playing
>>>>>>>>>>Deep Blue at 40/2 hrs?  Should I let Fritz 6 think one hour per move, or what?
>>>>>>>>>
>>>>>>>>>Does Fritz 6 have a 40 moves / 2000 hrs setting?
>>>>>>>>>That should be about right, as far as NPS goes.
>>>>>>>>
>>>>>>>>
>>>>>>>>  You are exaggerating just a bit, aren't you?
>>>>>>>
>>>>>>>No.
>>>>>>>DB calculates about 200M NPS, micros about 200K NPS.  (Roughly speaking -- I might
>>>>>>>be off by a factor of 2 or so on what Fritz 6 can do on a PIII 450, which would
>>>>>>>reduce it to 1000 hours instead of 2000.)
>>>>>>>
>>>>>>>DB was one heck of a machine.
>>>>>>
>>>>>>
>>>>>>Yes....  and it could peak at 1B nodes per second, with 200M being the typical
>>>>>>lower bound...   480 chess processors at 2 to 2.4M nodes per second each...
>>>>>
>>>>>480 chess processors at 2 to 2.4M nodes per second each can be equivalent to
>>>>>200M with one processor if you consider the loss of speed from parallel search.
>>>>>
>>>>>Uri
>>>>
>>>>
>>>>Parallel search doesn't lose speed.  It just searches extra nodes.  But the NPS
>>>>value goes up fairly linearly.  Try Crafty on a quad Xeon using 1 CPU, 2 CPUs,
>>>>etc.  At 4 CPUs the NPS is pretty much 4x.  But roughly 25% of the search space
>>>>is redundant...
>>>>
>>>>I haven't seen anyone adjust the NPS to reflect search efficiency, since it is
>>>>impossible to determine exactly how many nodes are 'extra overhead'...
>>>
>>>Are you talking about this in relation to an individual position?  In other words,
>>>that the amount of redundant search would change with each position, making it
>>>basically impossible to have an etched-in-stone formula?
>>>
>>>Pete
>>
>>
>>Absolutely.  And not even by position, but by "run".  I.e., run the same position
>>10 times and you will get 10 different node counts...
>
>I've noticed with benchmark tests that if you run them back to back, they come
>out different.  I've never been able to account for that, and I always thought
>it had more to do with my computer's use of memory than with the program
>itself.
>
>Or with programs that have no benchmark test, I've tried to measure the NPS to
>decide which optimizer flags in the makefile give the fastest program, so I'd do
>something completely out of the opening book like 1. a4, and I could do that 4 or
>5 times in a row and get different results.  So for each compile I'd have to do
>1. a4 several times and take an average to decide which one was faster.
>
>Pete

Two things cause different timing in a non-parallel program:

(1) The first run pages the program into memory, and part of the paging overhead
gets billed to the user, increasing the CPU time.  Successive runs use cached
pages of the original executable file already in memory, which speeds things up
a bit.

(2) The program can get scattered around in memory in non-optimal ways that
make cache performance suffer...  on other runs it can get a more favorable
'scattering' and run faster because it makes better use of the cache...

Parallel programming blows all that to hell, of course...
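
As a rough illustration of the averaging Pete describes doing by hand with 1. a4,
here is a minimal C sketch.  run_fixed_search() is a hypothetical stand-in for an
engine's fixed-depth search (it just burns some CPU and returns a node count so
the program compiles and runs); the point is timing the same work several times
and averaging the NPS, since paging and cache layout make individual runs vary:

#include <stdio.h>
#include <time.h>

/* hypothetical stand-in for a real engine's fixed-depth search */
static unsigned long run_fixed_search(void) {
    volatile unsigned long nodes = 0;
    for (unsigned long i = 0; i < 50000000UL; i++)
        nodes++;                 /* placeholder work; a real search counts real nodes */
    return nodes;
}

int main(void) {
    const int runs = 5;
    double total_nps = 0.0;

    for (int r = 1; r <= runs; r++) {
        clock_t start = clock();
        unsigned long nodes = run_fixed_search();
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
        if (secs <= 0.0) secs = 0.001;   /* guard against a run too fast to time */
        double nps = (double)nodes / secs;
        printf("run %d: %lu nodes in %.3f sec = %.0f NPS\n", r, nodes, secs, nps);
        total_nps += nps;
    }
    printf("average over %d runs: %.0f NPS\n", runs, total_nps / runs);
    return 0;
}

Build it with something like 'gcc -O2 bench.c' and compare the averages between
two builds rather than single runs.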
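
And just to sanity-check the numbers at the top of the thread (my rough figures,
nothing official): scale the time control by the NPS ratio so the micro searches
about the same number of nodes DB would have:

#include <stdio.h>

int main(void) {
    double db_nps    = 200e6;   /* Deep Blue, the conservative figure from above */
    double micro_nps = 200e3;   /* rough guess for a micro program on a PIII 450 */
    double ratio     = db_nps / micro_nps;   /* about 1000x */
    double hours     = 2.0 * ratio;          /* 40/2 hrs becomes 40/2000 hrs */

    printf("NPS ratio: %.0fx, so 40 moves in %.0f hours\n", ratio, hours);
    printf("if the micro is really 2x faster: 40 moves in %.0f hours\n", hours / 2.0);

    /* and for the parallel-search point: NPS scales ~4x on a quad Xeon, but
       with ~25% of the tree searched redundantly the effective speedup is
       closer to 4 * 0.75 = 3x (a rough approximation, not an exact formula) */
    printf("effective quad speedup: about %.1fx\n", 4.0 * (1.0 - 0.25));
    return 0;
}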


