Computer Chess Club Archives


Subject: Re: ChessBrain Result

Author: Vincent Diepeveen

Date: 02:34:59 02/03/04

On February 02, 2004 at 22:51:46, Robert Hyatt wrote:

>On February 02, 2004 at 17:16:15, Vincent Diepeveen wrote:
>
>>On February 02, 2004 at 11:47:08, Colin Frayn wrote:
>>
>>>On February 02, 2004 at 10:30:29, Vincent Diepeveen wrote:
>>>
>>>>On February 02, 2004 at 07:29:19, Colin Frayn wrote:
>>>>
>>>>Where can I find the logfile from this game showing the search depths from
>>>>ChessBrain, so that I can compare at home with a single-CPU engine?
>>>
>>>We didn't store everything so it's unlikely that this could be found.  I don't
>>
>>There is a central point where the decision gets made to play the move. Each
>>move is based upon a certain depth, no matter how the thing searches.
>>
>>Trivially it has a horrible speedup; that is not important.
>
>Then why are you so interested in it?  Didn't you say that after the WCCC

They claim to have the world's largest chess computer; let them prove that they
also managed to get some processors to work, because I will be amazed if their
speedup is much above 1.0 :)

>_you_ were going to post all your logs and speedup data for everyone to see?
>Have you posted it?  And you ask/bug others to post theirs???

Everyone who emailed me, I emailed all the logfiles. In fact, some runs I even
posted to CCC :)

If you want to put my logfiles on your FTP site, no problem. I just don't have
a homepage currently, that's all.

Note that during the games I was also kibitzing mainlines; I'm sure you saw
that yourself, being a regular observer :)

>>Important is to have that logfile. I'm not even asking for things like the
>>total number of nodes searched (in DIEP I collect statistics at a central
>>point without losing system time, and this gets logged to the logfile).
>>
>>If you can save the total number of nodes searched for a team, you surely can
>>save, for that single game you play against a GM, the logfile with the search
>>depth that the central point had.
>>
>>How selective it was or wasn't is not important now. Trivially, you must be
>>creative in the first few plies or you won't be able to get all nodes to
>>work. In DIEP, at 460 processors I am sometimes forced to split before the
>>null move even gets made; otherwise I simply don't get all nodes to work.
>>This while it is well known by everybody that splitting before or during the
>>null move is horrible for your speedup.
>>
>
>It isn't known by me.  You can split _in_ a null-move search just fine.  I do it
>now, I did it in Cray Blitz.  Everyone else I know of also does it with no
>problems...
>of course don't let small facts get in the way of big nonsense...

Cray Blitz had just 16 processors on a shared-memory machine with fast
interconnects and very slow CPUs, around 100 MHz or so.

That's beginner's stuff to get working compared to more than 100 CPUs.

Also, your thing was full-width. A non-recursive null move with R=1 reduces the
entire search by at most 1 ply; that doesn't count as a real null-move
search :)

You were, for 99.99999%, full-width minus at most 1 ply :)
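
For concreteness, here is a minimal sketch of the distinction, assuming a plain
fail-hard negamax; Position and the helper functions are hypothetical stubs,
not anyone's actual engine code:

#define R 1   /* the null-move reduction under discussion */

typedef struct Position Position;
int  evaluate(const Position *pos);
int  in_check(const Position *pos);
void make_null_move(Position *pos);
void undo_null_move(Position *pos);
int  search_all_moves(Position *pos, int depth, int alpha, int beta);

int search(Position *pos, int depth, int alpha, int beta, int allow_null)
{
    if (depth <= 0)
        return evaluate(pos);

    if (allow_null && !in_check(pos)) {
        make_null_move(pos);
        /* Non-recursive: pass 0, so no line ever contains more than one
           null move.  With R = 1 every line is then reduced by at most
           one ply in total: "full-width minus 1 ply".  A recursive null
           move would pass 1 here, letting R-ply reductions stack up
           along a single line and prune far more. */
        int score = -search(pos, depth - 1 - R, -beta, -beta + 1, 0);
        undo_null_move(pos);
        if (score >= beta)
            return beta;   /* null-move cutoff */
    }
    return search_all_moves(pos, depth, alpha, beta);
}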

Your search depths of course prove that. By the way, where are the logfiles
from these games?

Splitting problems really start getting major-league above 64 CPUs. Thanks to
null move the trees are very unstable; full-width they aren't. A full-width
search is not so easy to split at 500 processors either, but with null move you
get real nightmare problems, because you continuously must abort very remote
processors and different subsets. A very tiny search tree can abort some huge
search tree. Full-width this happens less.
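
A minimal sketch of that abort mechanism, assuming a YBW-style split point; all
names here are illustrative, not DIEP's or Crafty's actual code:

#include <stdatomic.h>
#include <stddef.h>

typedef struct SplitPoint {
    atomic_bool        abort;    /* set when any sibling fails high */
    struct SplitPoint *parent;   /* enclosing split point, if any */
    /* ... move list, bounds, owning processor, ... */
} SplitPoint;

/* Every helper must poll this at each node it visits; on a cluster the
   flag additionally has to travel as a message to the remote machines,
   and until it arrives they keep burning cycles on dead work. */
int aborted(SplitPoint *sp)
{
    for (; sp != NULL; sp = sp->parent)
        if (atomic_load(&sp->abort))
            return 1;
    return 0;
}

/* One cheap refutation (often from a reduced null-move search) in a
   tiny subtree cancels whatever huge subtrees the other processors
   had already started on. */
void fail_high(SplitPoint *sp)
{
    atomic_store(&sp->abort, 1);
}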

Additionally, a distributed project has dying-processor problems and different
latencies to different nodes, which make getting a speedup a lot harder.

I'm sure Colin Frayn & co. know more about the problems at such numbers of CPUs
than you'll ever know :)

Everyone already reports major speedup problems at 2 CPUs when splitting before
the null move.
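
A minimal sketch of the guard in question (hypothetical names): a node may only
become a split point after its null-move search has finished and at least one
real move has been searched, per young brothers wait:

enum { MIN_SPLIT_DEPTH = 4 };    /* illustrative threshold */

typedef struct Node {
    int depth;            /* remaining search depth at this node */
    int null_move_done;   /* null-move search finished here */
    int moves_searched;   /* real moves already searched here */
} Node;

int can_split(const Node *n, int idle_cpus)
{
    return idle_cpus > 0
        && n->depth >= MIN_SPLIT_DEPTH
        && n->null_move_done        /* never split before/during null move */
        && n->moves_searched >= 1;  /* young brothers wait */
}

At 460 processors that condition starves the idle CPUs, which is exactly the
bind described above: relax it and the speedup suffers, keep it and the
processors sit idle.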

So if you now claim that you have never had problems there, I demand that you
implement it in Crafty and play with that at the next amateur tourney :)

>
>
>
>>>know exact figures, but I can certainly tell you that during testing we were
>>>finding the move b6! in WAC100 in well under a minute with a few hundred
>>>PeerNodes whereas standalone Beowulf on my machine couldn't find it within 10
>>>minutes at the time.  Part of the benefit of much more memory being thrown at
>>>the problem, even if it was not linked together.
>>>
>>>>I am very interested in knowing how much speedup efficiency you get out of
>>>>the thing.
>>>
>>>At the moment it's hideously inefficient - I noticed that when (for the first
>>>time) I saw the thing running the night before the match!  At some points we
>>>were wasting almost a minute each move (that I now know about, and we can fix)
>>>
>>>>When I ran a simulation with distributed DIEP on the supercomputer at 460
>>>>processors, the speedup was not so good.
>>>
>>>It's certainly not a huge speedup at the moment, but we've got a lot of possible
>>>avenues for improvement, that's for sure.  All I know is that Beo is a 2400
>>>engine at best, probably worse, and we got a better performance than that, at
>>>least after the first few out-of-book moves (which weren't very strong).
>>>
>>>Search depth is also complicated because if, e.g., the server sends out a node
>>>at ply 2 and it searches for 8 ply depth, this isn't the same as searching the
>>>root position to 10 ply because we're being much more creative with the depth of
>>>search and pruning etc, so the exact search depth is quite variable along the
>>>first ply.
>>>
>>>>Good luck in your effort finding sponsors.
>>>
>>>Thanks.  Hopefully we won't need luck any more....
>>>
>>>Cheers,
>>>Col
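
On the depth-accounting point above, a minimal sketch of such a scheme, with
entirely hypothetical names: the server expands the first plies itself and
farms the frontier positions out at per-branch depths, so "ply 2 searched to
depth 8" is not simply "root searched to depth 10":

typedef struct Position Position;   /* stub */

typedef struct {
    Position *pos;     /* frontier position, e.g. at ply 1 or 2 */
    int       ply;     /* its distance from the root */
    int       adjust;  /* extensions/reductions applied on the way */
} FrontierEntry;

typedef struct {
    Position *pos;     /* position a PeerNode should search */
    int       depth;   /* remaining depth requested from it */
} Job;

void send_to_peernode(const Job *j);  /* transport stub */

void distribute(const FrontierEntry *f, int n, int base_depth)
{
    for (int i = 0; i < n; i++) {
        Job j;
        j.pos   = f[i].pos;
        /* nominal remainder plus a per-branch adjustment, so the
           effective depth varies along the first ply */
        j.depth = base_depth - f[i].ply + f[i].adjust;
        send_to_peernode(&j);
    }
}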


