Author: Robert Hyatt
Date: 08:37:44 05/03/04
On May 03, 2004 at 11:13:11, Vincent Diepeveen wrote:

>On May 03, 2004 at 10:53:19, Robert Hyatt wrote:
>
>>On May 03, 2004 at 10:19:46, Sune Fischer wrote:
>>
>>>
>>>>I don't see any at 8. I don't personally have access to a 16-way box yet so I can't say anything there. But there is nothing that really makes null-move hurt parallel search...
>>>
>>>Couldn't it be that nullmove hurts scalability in much the same way alpha-beta "hurts" parallel search compared to minimax?
>>
>>I don't see how. It might make _some_ positions more unstable, and unstable positions hurt parallel search. But it also makes other positions more stable.
>
>What nonsense. "I don't see how."
>
>It is very clearly proven. For everyone doing parallel research it is *trivial* that more unstable trees are harder to split. Note that in old issues of the ICGA Journal it is already mentioned, for example by Jonathan Schaeffer.

So? Do you claim that null-move makes the search more "unstable"? I hope you don't, as you are wrong if so...

I don't see where I claimed that unstable trees are no more problematic than stable trees. In fact I have repeatedly mentioned this problem. But then you somehow want to drag null-move into the equation with no supporting data other than the data I provided when this first came up. If you think a speedup of 3.1 is that much better than the 3.0 I got or the 2.8 GCP got, more power to you. It is even probable that a different test set would change these numbers in either direction.

>
>Further you must wait *longer* now to split because you first nullmove and must search another move before splitting.

No. You just split somewhere else. At least that is how _I_ do this. You may wait longer. I don't, because my processors are not constrained to work together in any way. When one goes idle, it is highly probable that a split is done almost instantly. The idle time measurement supports this clearly.

I measure "idle time" in Crafty. On a 4-way box as used in CCT6 this "idle time" averaged about 5% of one CPU. That means that out of 400% total CPU time available, 395% was actually spent searching and 5% was spent idling with no work to do. Note that this is over a whole game, not counting time lost to EGTB probes in the endgame, where the CPU percentage goes _way_ down due to I/O, not SMP issues. A great many positions had an idle time of 1-2% out of 400%; some dropped as low as 390% busy. The average is probably closer to 396% or 397%.

Better try again... That excuse won't fly...

>
>All issues contributing to the hard fact that getting a good speedup with nullmove is a lot harder than when searching fullwidth.
>
>But keep denying it.

I will, unless you are claiming that 3.0x vs 3.1x is "a lot harder to get." Of course it seems that you _are_ arguing that point for whatever reason. But it definitely is not "a lot harder".

>
>Just like a few years ago you claimed fullwidth to be better than searching with nullmove in chess.

Not on your life. I said a 10 ply non-null search is more accurate than a search with null. For example, a 10 ply no-null search vs an 11 ply null search. All in the context of the Deep Blue argument, where we were talking about 12 ply non-null vs 12-14 ply null-move on micros... That's easy to see... Crafty has had null-move since 1995, Cray Blitz since the late 80's. So try again...
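As a rough illustration of where the numbers above come from, here is a minimal sketch (not Crafty's actual code) of computing the "idle time out of 400%" and the speedup. The counters busy_time[] and idle_time[] and the two input times are hypothetical placeholders; in a real engine the search threads would maintain the counters, and serial_time/parallel_time would come from running the same test positions on 1 CPU and on NCPUS CPUs.

#include <stdio.h>

#define NCPUS 4

/* Hypothetical per-thread counters: seconds spent searching vs. waiting
   for work.  A real engine's search threads would fill these in. */
double busy_time[NCPUS];
double idle_time[NCPUS];

/* serial_time / parallel_time: time to finish the same test positions
   with 1 CPU and with NCPUS CPUs. */
void report_smp_stats(double serial_time, double parallel_time)
{
  double busy = 0.0, idle = 0.0;

  for (int i = 0; i < NCPUS; i++) {
    busy += busy_time[i];
    idle += idle_time[i];
  }

  double wall     = (busy + idle) / NCPUS;        /* approximate wall clock  */
  double busy_pct = 100.0 * busy / wall;          /* e.g. 395 out of 400%    */
  double idle_pct = 100.0 * idle / wall;          /* e.g. 5 out of 400%      */
  double speedup  = serial_time / parallel_time;  /* e.g. 3.0 on a 4-way box */

  printf("cpu %.0f%% busy, %.1f%% idle (of %d%%), speedup %.2fx\n",
         busy_pct, idle_pct, NCPUS * 100, speedup);
}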
>
>>>
>>>I wouldn't be surprised if a more selective program was harder to parallelize in general.
>
>Nullmove gets done *before* you can apply YBW, which has been proven by nearly everyone to be the best way to split the tree. So nullmove has the earlier mentioned effects.

Baloney. At ply=N I do a null-move. Therefore at ply N+1 I don't, and I don't have that problem there... And since I have no real limits on where I can split within the tree, this simply is not a problem. Otherwise my "idle time" would be significantly higher. It isn't. It is much better to actually _measure_ something as opposed to using hand-waving. Anybody can test this in Crafty by just looking at the CPU % it reports...

>
>It's hard to compare nullmove in that respect with other forms of forward pruning which happen while you already parallel search.
>
>>I don't see why, unless you form the hypothesis that "forward pruning makes move ordering _worse_." That's the only way this could happen...
>>
>>There are obviously "issues". Forward pruning tosses moves out. So at any node you will have fewer branches to search than in a normal (non-pruning) program. But if you don't require that all processors always work at the same node, this should not be a problem. IE Crafty searches endgames just as efficiently as it searches complex middlegames, from an SMP perspective...
>
>In endgames you search in general bigger depths. So on average the trees that are there after you split are bigger. Bigger remaining depth means less overhead and more efficient parallel search.
>
>You wrote that down even in your own DTS article.

Yes I did. And I have written it several times since. Only you seem to have problems grasping the concept... I was talking about endgames having a narrower branching factor. That doesn't seem to hurt my SMP stuff at all. Middlegames have wider branching factors. That doesn't hurt either. Deeper depth always appears to help, all other things being equal.

>
>It seems the real selectivity here is in your memory!

Or a real inability to understand in yours???

>
>>
>>
>>>
>>>Perhaps at some point the benefit of nullmove is even lost to parallel inefficiency and the disadvantages of nullmove begin to outweigh the savings? That might make an interesting experiment :)
>>>
>>>Of course I don't know anything about it, only guessing.
>>>
>>>-S.
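To make the YBW point concrete, here is a rough sketch of a node that tries the null move first and only becomes a split candidate after its first real move has been searched. This is not Crafty's DTS code; POSITION, MOVE, R, MIN_SPLIT_DEPTH and the helper functions are placeholders for whatever a given engine provides. The relevant point is in the comments: the null move delays split eligibility only at this one node, and an idle processor is free to attach to any other eligible node anywhere in the tree.

/* Placeholder types and hooks, standing in for the engine's own. */
typedef struct position POSITION;
typedef int MOVE;
enum { NO_MOVE = 0, R = 2, MIN_SPLIT_DEPTH = 4 };

extern int  evaluate(POSITION *p);
extern int  null_move_ok(POSITION *p);
extern void make_null(POSITION *p);
extern void unmake_null(POSITION *p);
extern MOVE next_move(POSITION *p);
extern void make_move(POSITION *p, MOVE m);
extern void unmake_move(POSITION *p, MOVE m);
extern void add_split_candidate(POSITION *p, int ply, int depth);

int search(POSITION *p, int alpha, int beta, int depth, int ply)
{
  int moves_searched = 0;

  if (depth <= 0)
    return evaluate(p);                   /* placeholder for qsearch/eval */

  /* Try the null move first (this is "ply N" in the discussion above).
     Until a real move has been searched here, this node is not a YBW
     split candidate. */
  if (null_move_ok(p)) {
    int score;
    make_null(p);
    score = -search(p, -beta, -beta + 1, depth - 1 - R, ply + 1);
    unmake_null(p);
    if (score >= beta)
      return score;                       /* null-move cutoff */
  }

  MOVE m;
  while ((m = next_move(p)) != NO_MOVE) {
    int score;
    make_move(p, m);
    score = -search(p, -beta, -alpha, depth - 1, ply + 1);
    unmake_move(p, m);
    moves_searched++;

    if (score >= beta)
      return score;
    if (score > alpha)
      alpha = score;

    /* YBW: once the first ("eldest brother") move is done, the remaining
       moves at this node may be handed to idle processors.  A processor
       that went idle elsewhere can just as well grab a candidate at
       ply N+1, where no null move was tried, or anywhere else in the
       tree, so it does not have to wait on this node's null move. */
    if (moves_searched == 1 && depth >= MIN_SPLIT_DEPTH)
      add_split_candidate(p, ply, depth);
  }
  return alpha;
}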