Author: Robert Hyatt
Date: 19:51:46 02/02/04
On February 02, 2004 at 17:16:15, Vincent Diepeveen wrote:

>On February 02, 2004 at 11:47:08, Colin Frayn wrote:
>
>>On February 02, 2004 at 10:30:29, Vincent Diepeveen wrote:
>>
>>>On February 02, 2004 at 07:29:19, Colin Frayn wrote:
>>>
>>>Where can i find the logfile from this game showing search depths from
>>>chessbrain, so that i can compare at home with a single cpu engine?
>>
>>We didn't store everything so it's unlikely that this could be found. I don't
>
>There is a central point where the decision gets made to play the move. Each
>move is based upon a certain depth, no matter how the thing searches.
>
>Trivially it has a horrible speedup, this is not important.

Then why are you so interested in it? Didn't you say that after the WCCC _you_
were going to post all your logs and speedup data for everyone to see? Have you
posted it? And you ask/bug others to post theirs???

> Important is to have
>that logfile. I'm not even asking that you make things like collecting the total
>number of nodes searched (without losing system time in diep i collect
>statistics in DIEP at a central point this gets logged to the logfile).
>
>If you can save the total number of nodes searched for a team, you sure can save
>for that single game you play against a GM the logfile with the search depth in
>it that the central point had.
>
>How selective it was or wasn't is not important now. Trivially you must be
>creative the first few plies or you won't be able to get all nodes to work. In
>diep i'm forced at 460 processors to sometimes already split before nullmove
>gets made, otherwise i don't get all nodes to work simply. This where it is well
>known by everybody that splitting before or during nullmove is horrible for your
>speedup.
>

It isn't known by me. You can split _in_ a null-move search just fine. I do it
now, I did it in Cray Blitz. Everyone else I know of also does it with no
problems... of course don't let small facts get in the way of big nonsense...
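To make the point of disagreement concrete: a null-move search is just another reduced-depth search inside the ordinary negamax recursion, so a parallel engine can open split points inside that subtree the same way it does anywhere else in the tree. The sketch below is a minimal illustration over a made-up abstract game -- the eval function, the branching factor, and the positions-as-integers scheme are all invented for this example, not code from DIEP, Crafty, or Cray Blitz -- with comments marking where split points could go.

```c
#include <assert.h>

#define R 2         /* conventional null-move depth reduction */
#define BRANCH 2    /* toy branching factor */

/* Toy static evaluation: a "position" is just an int. */
static int eval(int pos) { return pos % 7 - 3; }

/* Negamax with null-move pruning.  The null-move subtree is an ordinary
   reduced-depth search, so a parallel engine can place split points
   inside it exactly as at any other node. */
static int negamax(int pos, int depth, int alpha, int beta) {
    if (depth == 0)
        return eval(pos);

    if (depth > R) {
        /* Null move: hand the opponent the move and search reduced.
           A parallel engine could split this subtree across CPUs too. */
        int v = -negamax(pos, depth - 1 - R, -beta, -beta + 1);
        if (v >= beta)
            return beta;          /* null-move cutoff */
    }

    for (int m = 0; m < BRANCH; m++) {
        /* Regular moves: child of position p by move m is 2*p + m. */
        int v = -negamax(pos * 2 + m, depth - 1, -beta, -alpha);
        if (v > alpha)
            alpha = v;
        if (alpha >= beta)
            break;                /* beta cutoff */
    }
    return alpha;
}
```

The structural point is that the recursive call for the null move has the same shape as the calls for real moves, so nothing about it prevents splitting there.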
>>know exact figures, but I can certainly tell you that during testing we were
>>finding the move b6! in WAC100 in well under a minute with a few hundred
>>PeerNodes whereas standalone Beowulf on my machine couldn't find it within 10
>>minutes at the time. Part of the benefit of much more memory being thrown at
>>the problem, even if it was not linked together.
>>
>>>I am very interested knowing in how much of a speedup efficiency you get out of
>>>the thing.
>>
>>At the moment it's hideously inefficient - I noticed that when (for the first
>>time) I saw the thing running the night before the match! At some points we
>>were wasting almost a minute each move (that I now know about, and we can fix)
>>
>>>When i ran a simulation with diep distributed at the supercomputer at 460
>>>processors, the speedup was not so good.
>>
>>It's certainly not a huge speedup at the moment, but we've got a lot of possible
>>avenues for improvement, that's for sure. All I know is that Beo is a 2400
>>engine at best, probably worse, and we got a better performance than that, at
>>least after the first few out-of-book moves (which weren't very strong).
>>
>>Search depth is also complicated because if, e.g., the server sends out a node
>>at ply 2 and it searches for 8 ply depth, this isn't the same as searching the
>>root position to 10 ply because we're being much more creative with the depth of
>>search and pruning etc, so the exact search depth is quite variable along the
>>first ply.
>>
>>>Good luck in your effort finding sponsors.
>>
>>Thanks. Hopefully we won't need luck any more....
>>
>>Cheers,
>>Col
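Since both sides of the thread argue about "speedup" at hundreds of processors, it may help to pin down the quantity being argued over: with p processors, speedup is S = T1/Tp (single-CPU time over p-CPU time to the same depth) and efficiency is E = S/p. The sketch below uses invented timings purely for illustration; they are not measurements from DIEP, ChessBrain, or Beowulf.

```c
#include <assert.h>

/* Standard parallel-search bookkeeping: t1 is the single-CPU time to
   reach a fixed depth, tp the time with p CPUs on the same position.
   All numbers fed to these functions here are illustrative only. */
static double speedup(double t1, double tp) {
    return t1 / tp;
}

static double efficiency(double t1, double tp, int p) {
    return speedup(t1, tp) / p;
}
```

For example, reaching in 120 seconds on 460 processors a depth that takes 3600 seconds on one processor is a speedup of 30, i.e. an efficiency of about 6.5% -- the kind of "not so good" figure the thread is debating.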
Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.