Author: Robert Hyatt
Date: 11:50:53 10/05/03
On October 05, 2003 at 05:17:40, Sune Fischer wrote:

>On October 04, 2003 at 23:57:37, Robert Hyatt wrote:
>
>>He has _clearly_ said (many times) that it rises beyond N where N is the
>>number of processors. Since the entire question of parallel search depends
>>on N, and if N doesn't serve as a bound, I can't see what would...
>
>Hmm, that wouldn't make any sense to me.
>
>I thought his theory was that because of the very big hash table he is able to
>almost keep the entire tree in there, and that this would somehow lower the
>branch factor ply for ply.

If that is true, we have the _same_ problem, i.e. eventually we solve the game
on today's hardware...

>
>>>
>>>>Vincent's "equation" has no known limit yet, because the obvious limit
>>>>for N processors is <= N, but he has (for years) claimed efficiency > N
>>>>for N processors. Since N is the controlling term in any equation based
>>>>on N, and since N is not a limit, the speedup (to me) appears to be
>>>>unbounded, plain and simple. Any speedup whatsoever that is >N means it
>>>>must be unbounded.
>>>>And we already have the >N claim in writing, many times.
>>>>
>>>>(not from me or anybody that knows what they are doing, however).
>>>
>>>Anyone who understands the basics of alpha-beta knows it's absurd.
>>>The funny thing is it can happen in practise, unfortunately it's nothing to be
>>>happy about, it just means that your serial search isn't "optimal".
>>
>>Careful in how you word that. >2 is possible in _some_ positions. But _not_
>>in an average set of positions. That's probably what screws up Vincent's
>>numbers, using a single position. I have several that produce >2 speedup
>>with 2 processors. I have several that produce <1.5 speedup with 2 processors
>>as well. I've _never_ produced an average speedup > 2.0 for any reasonable
>>set of positions...
>
>No, it shouldn't happen in theory.
>It happens anyway, so what could be the possible explanation?

Bad move ordering.
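The point above — that an apparent >2 speedup usually means the serial baseline had bad move ordering — can be illustrated with a toy experiment. Everything below (branching factor, depth, random leaf scores) is invented for illustration and is not from the thread: the same alpha-beta search, run best-first versus worst-first, visits wildly different node counts, so any search that accidentally repairs the ordering looks "superlinear" against the badly ordered baseline.

```python
import random

random.seed(42)
BRANCH, DEPTH = 4, 6

# Build a fixed random game tree: internal nodes are lists, leaves are scores.
def build(depth):
    if depth == 0:
        return random.randint(-100, 100)
    return [build(depth - 1) for _ in range(BRANCH)]

TREE = build(DEPTH)

def minimax(node, depth, maximizing):
    # Plain minimax value, used here only to construct the move orderings.
    if depth == 0:
        return node
    vals = [minimax(c, depth - 1, not maximizing) for c in node]
    return max(vals) if maximizing else min(vals)

def alphabeta(node, depth, alpha, beta, maximizing, best_first, count):
    count[0] += 1
    if depth == 0:
        return node
    # best_first=True visits the strongest child first (near-ideal ordering);
    # best_first=False visits the weakest first (a badly ordered serial search).
    kids = sorted(node,
                  key=lambda c: minimax(c, depth - 1, not maximizing),
                  reverse=(maximizing == best_first))
    if maximizing:
        best = -10**9
        for c in kids:
            best = max(best, alphabeta(c, depth - 1, alpha, beta, False,
                                       best_first, count))
            alpha = max(alpha, best)
            if alpha >= beta:
                break                      # beta cutoff
        return best
    best = 10**9
    for c in kids:
        best = min(best, alphabeta(c, depth - 1, alpha, beta, True,
                                   best_first, count))
        beta = min(beta, best)
        if alpha >= beta:
            break                          # alpha cutoff
    return best

good, bad = [0], [0]
v_good = alphabeta(TREE, DEPTH, -10**9, 10**9, True, True, good)
v_bad = alphabeta(TREE, DEPTH, -10**9, 10**9, True, False, bad)
assert v_good == v_bad == minimax(TREE, DEPTH, True)
print("best-first nodes:", good[0], " worst-first nodes:", bad[0])
```

Both searches return the same value, but the worst-first run searches many times more nodes; dividing its node count by anything resembling the best-first count produces exactly the kind of "impossible" ratio discussed above.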
>
>I can think of only one, that the move ordering is bad in the single search and
>that's causing the parallel to look better because it fixes some of these
>issues.

Correct.

>
>This is why I don't like just counting nodes or looking at time to solutions,
>it's not really a good measure of search efficiency at all.
>
>When you get a time to solution lower than what is theoretically possible, the
>number doesn't make sense, and actually it never makes sense.
>Suppose on a position the theoretical speedup is 1.94 and you achieve 1.96;
>that still doesn't make sense, but you never realize this because it's lower
>than 2.00.
>
>In short, you can never tell if your parallel search looks good because it
>_is_ good, or because the serial search is bad; both will look the same in
>these experiments.
>

That is why you use a large set of test positions, so that the good and bad
cases end up as extreme points and you get a good average in the middle.

>>>I guess a program using reversed move ordering would be quite capable of
>>>consistently producing >2 speedups? :)
>>
>>Yes, but then there is a way to make the serial algorithm faster. Search
>>two root moves, one node at a time on move one, one node on move two, bouncing
>>back and forth with a single thread. That will run faster than the bad serial
>>algorithm, and then the parallel algorithm won't go > 2x faster than that.
>
>Yes, that was the point: a bad serial search can make a parallel search look
>good. You can search minimax and get a perfect parallel speedup.
>
>-S.

I did that test in my Ph.D. thesis. And yes, it is pretty easy to get a perfect
parallel speedup on minimax, until you get to large numbers of processors of
course, as anything is hard then.
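The interleaving argument above — a single thread alternating between root moves runs faster than the badly ordered serial search, so the honest parallel speedup stays bounded by the processor count — can be sketched with a back-of-the-envelope node-count model. All numbers and the cost model below are invented for illustration: one root move is the "refutation" whose score raises alpha, after which every other move is cheap to dismiss.

```python
# Hypothetical per-move node costs (invented for illustration).
# Move 0 is the refutation: once it completes, its score raises alpha,
# so every other root move can be dismissed cheaply.
FULL = [1_000, 50_000, 60_000]   # cost with no useful alpha bound
CHEAP = [1_000, 500, 600]        # cost once the refutation's bound is known
REFUT = 0

def serial_bad():
    # Bad move ordering: the refutation is searched last, so the other
    # moves are first searched at full cost.
    nodes, have_bound = 0, False
    for m in [1, 2, REFUT]:
        nodes += CHEAP[m] if have_bound else FULL[m]
        if m == REFUT:
            have_bound = True
    return nodes

def serial_interleaved():
    # Still single-threaded: round-robin one node at a time across all
    # root moves. The refutation completes after len(FULL) * FULL[REFUT]
    # node steps; by then every other move has already received that many
    # nodes and only needs its cheap remainder, if any.
    n = len(FULL)
    steps_until_bound = n * FULL[REFUT]
    spent_per_move = FULL[REFUT]
    remainder = sum(max(CHEAP[m] - spent_per_move, 0)
                    for m in range(n) if m != REFUT)
    return steps_until_bound + remainder

print("bad serial:", serial_bad(), " interleaved serial:", serial_interleaved())
```

In this toy model the badly ordered serial search costs 111,000 nodes while the interleaved single-thread search costs 3,000. A 2-cpu search that happens to reach the refutation early can therefore look far better than 2x against the bad baseline, while measured against the interleaved serial search its speedup stays at most 2.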