Author: Vincent Diepeveen
Date: 05:30:36 09/13/02
On September 11, 2002 at 22:09:26, Robert Hyatt wrote:

>On September 11, 2002 at 15:27:32, Dann Corbit wrote:
>
>>On September 11, 2002 at 15:10:49, Gian-Carlo Pascutto wrote:
>>[snip]
>>>>2. That there is definitely a linear improvement in new CPUs and 64 CPUs is
>>>>almost exactly twice as good as 32 CPUs. I think that is pretty astonishing,
>>>>and so however it is that communication happens between nodes should be tried
>>>>in other systems.
>>>
>>>It's a natural consequence of the fact that they have (almost) no
>>>synchronisation costs.
>>
>>Well, that's pretty interesting, isn't it? Can the same model be copied to
>>other algorithms?
>>
>>Surely they have to coordinate the information from the entire set of
>>processors somehow. The processors cannot be purely independent of one
>>another or no resolution would be reached. If a similar speedup could be
>>obtained with another scheme, it might be very valuable.
>>
>>Also, since they have done a test with 64 CPUs, I assume that they are using
>>some sort of NUMA architecture, since there are not many 64-CPU SMP systems
>>around. Hence, it might scale to stupendous CPU counts like the DoD
>>supercomputers with thousands of CPUs. Even if the efficiency is 1%, with
>>10000 CPUs you would be at 100x the speed of a single-CPU system. Maybe no
>>other system can match that.
>
>Two comments to _everybody_ fiddling with this thread.
>
>1. The best serial algorithm is generally _not_ the best parallel
>algorithm. This has been shown over and over and over. The best parallel
>algorithm, by inverse thinking, is not necessarily the best serial algorithm
>either. Just because we use alpha/beta today doesn't mean we will be using
>it in 5 years on massively parallel machines. Quicksort is the best serial
>sort today. It isn't necessarily the best parallel sort for large numbers
>of processors.
>
>2. Best-first is not thought much of today, because it has never worked well
>in the past. But that doesn't mean it _can't_ work well, just that nobody has
>really worked with it anywhere near as much as with Shannon's original minimax
>approach, later augmented with the now infamous alpha/beta modification...

In the general case I don't disagree, but in the case of chess, assuming a
game-playing program, I completely disagree.

It is well established that best-first search, or any related search such as
CNS, can work pretty well for solving certain things. We see that they solve
some tactical tricks pretty quickly with best-first search. However, that is a
trivial thing for best-first search to do: best-first search is *known* to
work well for problems where the heuristics are simple and the search space is
easy to define.

A good example is Junior. Just throw these tricks at it and you will see it
find them within seconds, because it extends captures and checks more than
most other programs do. Yet its search IS a depth-limited search, NOT a
best-first search. If we compare a depth-limited program optimized to find
tricks with any best-first search, the best-first search programs already get
completely *outgunned* everywhere. The simple proof: modify the Crafty version
they used to solve these tricks into a version which gives an extension of 0.5
ply for every capture and every check. If you compare that with the team's
findings, their best-first search will look immature in all respects.
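To make that modification concrete, here is a rough sketch in C. All names
here (Position, Move, generate_moves, and so on) are made-up placeholders, not
Crafty's actual code; depth is counted in half-ply units so that the 0.5-ply
extension stays an integer:

    /* Sketch of 0.5-ply extensions for captures and checks in an ordinary
     * depth-limited alpha-beta search.  All primitives below are assumed
     * to exist elsewhere; none of this is Crafty's actual code. */

    typedef struct Position Position;
    typedef int Move;

    #define MAX_MOVES  256
    #define DEPTH_UNIT 2   /* one ply = 2 units, so 0.5 ply = 1 unit */

    int  generate_moves(Position *pos, Move *moves);
    void make_move(Position *pos, Move m);
    void unmake_move(Position *pos, Move m);
    int  is_capture(Move m);
    int  gives_check(const Position *pos); /* side to move now in check? */
    int  quiesce(Position *pos, int alpha, int beta);

    int search(Position *pos, int alpha, int beta, int depth)
    {
        if (depth <= 0)
            return quiesce(pos, alpha, beta);

        Move moves[MAX_MOVES];
        int n = generate_moves(pos, moves);

        for (int i = 0; i < n; i++) {
            make_move(pos, moves[i]);

            /* the proposed modification: extend every capture and every
             * check by half a ply */
            int ext = (is_capture(moves[i]) || gives_check(pos))
                          ? DEPTH_UNIT / 2 : 0;

            int score = -search(pos, -beta, -alpha, depth - DEPTH_UNIT + ext);
            unmake_move(pos, moves[i]);

            if (score >= beta) return beta;   /* cutoff */
            if (score > alpha) alpha = score;
        }
        return alpha;
    }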
What they do not realize is that captures and checks limit the number of
'plausible moves', so it is very likely that best-first search projects find
tricks seemingly fast, whereas the reality is that they suck. In the case of
positional play, however, best-first search quality rests on the quality of
the estimate of which subtree to search, so there it works against them.

We see that world champion Junior's real weakness is that when it reports a
depth of 19 ply, some positional lines of non-trivial moves are searched only
up to a depth of 5 ply at most, as it seemingly has 3 basic move classes
(perhaps they want to comment on this?):
- good moves, costing 1 unit
- average moves, costing 2 units
- bad moves (such as moving a rook to a non-open file), costing 3 units

Any such system completely outguns any best-first search approach tactically,
and I need to note here that Junior definitely tries to solve positional
problems as well. A simple version of DIEP which I modified to find just
tactics in the above 1/2/3-unit way (which works 100x better with hash tables
than fractional extensions, so please do NOT compare it with fractional search
depths, which EXTEND lines rather than charging extra for them; see the sketch
at the end of this post) obviously finds all the tactical tricks at light
speed, even though it had neither mating extensions nor singular extensions,
which is what their 'comments' in the article are about.

So throwing rocks at them is not the right thing to do, as they presented
their results as they were, but obviously I do not share their conclusion at
all and can in fact EXPLAIN why best-first search *seemingly* is not too bad
with regard to tactics. The real test for their program is of course to play
100 games, without learning on either side, against other programs, with both
a depth-limited version as we all use and a best-first search version. That
says something about how seriously we must take their 2-hour project.

>I hesitate to throw rocks at people trying new ideas. Sometimes they waste
>tons of time, but _sometimes_ they find remarkable results...
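To make the 1/2/3-unit idea concrete, here is a minimal sketch of such a
unit-cost search, reusing the assumed primitives from the sketch above. The
move classifier is made up for illustration and is not Junior's or DIEP's
actual scheme:

    /* Sketch of a unit-cost ('plausible move') search: every move is
     * charged 1, 2 or 3 units against the search budget instead of a
     * flat one ply.  classify_move() is an assumed heuristic:
     * 1 = good move, 2 = average move, 3 = bad move (e.g. a rook move
     * to a non-open file). */
    int classify_move(const Position *pos, Move m);

    int cost_search(Position *pos, int alpha, int beta, int budget)
    {
        if (budget <= 0)
            return quiesce(pos, alpha, beta);

        Move moves[MAX_MOVES];
        int n = generate_moves(pos, moves);

        for (int i = 0; i < n; i++) {
            int cost = classify_move(pos, moves[i]); /* 1, 2 or 3 units */
            make_move(pos, moves[i]);
            int score = -cost_search(pos, -beta, -alpha, budget - cost);
            unmake_move(pos, moves[i]);

            if (score >= beta) return beta;          /* cutoff */
            if (score > alpha) alpha = score;
        }
        return alpha;
    }

Note that the remaining budget is a small integer that only ever decreases,
so the same position reached by different move orders tends to land on the
same remaining budget; that is one plausible reading of the hash-table
advantage claimed above, compared to schemes that extend lines.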