Author: Robert Hyatt
Date: 05:15:29 03/03/00
On March 03, 2000 at 07:28:52, Graham Laight wrote:

>On March 02, 2000 at 10:08:43, Robert Hyatt wrote:
>
>>On March 02, 2000 at 07:04:58, Graham Laight wrote:
>>
>>>On March 01, 2000 at 23:37:30, Tom Kerrigan wrote:
>>>
>>>>On March 01, 2000 at 07:37:55, Graham Laight wrote:
>>>>
>>>>>Pentium processors are a big and competitive market. Trouble is, I don't think
>>>>>they're the best architecture to put together in large numbers on the same
>>>>>motherboard.
>>>>
>>>>Intel has been hell-bent on making the world's fastest single processor.
>>>>
>>>>They seem to be ignoring the fact that several fast processors can be put on one
>>>>chip.
>>>>
>>>>If they were so inclined, I don't think it would be a problem to put 4
>>>>(original) Pentiums on one chip. And there would probably be some space left
>>>>over for L2 cache.
>>>>
>>>>AMD is taking this approach, but I don't know when they will have a product
>>>>ready, or how much it will cost. There's no manufacturing reason for such a
>>>>product to cost more than a single processor, but I assume they will milk it for
>>>>all it's worth.
>>>>
>>>>-Tom
>>>
>>>Thanks to everyone for replying - and they're all good, interesting answers.
>>>
>>>However, what I failed to make clear was this: I wasn't talking about two, four,
>>>or even eight processors - I was talking about THOUSANDS of processors!
>>>
>>>I have read articles in the computer press about companies making multiprocessor
>>>boards of this order of magnitude in a low-cost way.
>>>
>>>I think we'll have to wait a long time for the Intel architecture to scale up to
>>>that kind of level. Hence my remark that this is a marketing issue rather than a
>>>technical one.
>>>
>>>-g
>>
>>
>>It isn't so easy to do.
>>
>>I.e. the best architecture has shared memory. In a 32-processor Cray T932
>>machine, 70% of the _total_ cost of the machine is in the hardware that
>>connects the CPUs to memory. 70%. That leaves 30% for what most would agree
>>are very expensive CPUs.
>>
>>The other approach is message passing. This is _much_ less efficient, and
>>using "thousands of CPUs to play chess" is not just difficult, but _very_
>>difficult.
>
>It can't be impossible - just look at the example of animal brains (including
>the one you're using right now to interpret this text). Would you say that an
>animal brain (or silicon neural network) is doing more "memory sharing" or more
>"message passing"? Probably a combination of the two.

Note that "very difficult" != "impossible".

As far as the human brain goes, I have no idea. That is part of the problem.
But the human brain does have a tremendous number of 'connections' internally,
which bypasses the biggest problem in message passing.

>
>Anyway - to get back to traditional silicon computers: my suggestion is to use a
>hierarchy instead of shared memory.
>
>A simple example of how this could be done would be as follows: processor 1 is
>given a task. It delegates parts of that task to processors 2, 3, and 4.
>Processor 2 delegates part of its work to processors 5, 6, and 7. Processor 3
>delegates part of its work to processors 8, 9, and 10 - and so on.
>

That always sounds good. But different parts of the tree produce subtrees of
vastly different sizes, so this leads to lots of processor A waiting while
processor B finishes something. That has to be solved. In a shared-memory
machine it is not hard to solve. In a message-passing machine it is more
difficult.
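The load-imbalance problem above is easier to see with a little code. Below is
a minimal sketch, in C with POSIX threads, of how a shared-memory program can
let an idle processor help out at a "split point". The names (split_point,
claim_next_move) are hypothetical and not taken from any particular engine;
the point is only that an idle processor can pick up useful work with nothing
more than a lock acquisition.

#include <pthread.h>

#define MAX_MOVES 256

typedef struct {
    pthread_mutex_t lock;   /* assumed initialized with pthread_mutex_init() */
    int moves[MAX_MOVES];   /* moves not yet searched at this node           */
    int count;              /* number of moves generated at this node        */
    int next;               /* index of the next unsearched move             */
} split_point;

/* The owner and any idle helpers all call this; whoever gets the lock first
   searches the next move.  Returns 0 when the node has no work left.        */
int claim_next_move(split_point *sp, int *move)
{
    int got = 0;
    pthread_mutex_lock(&sp->lock);
    if (sp->next < sp->count) {
        *move = sp->moves[sp->next++];
        got = 1;
    }
    pthread_mutex_unlock(&sp->lock);
    return got;
}

In a message-passing design the same idle processor instead has to send a work
request to some busy processor, wait for the position and search bounds to be
shipped back across the interconnect, and only then start searching. Hiding
that round-trip latency is exactly the "A waiting on B" problem described
above.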
>Once this has been perfected, I have some even more advanced ideas on how to
>progress from there - but I'd like to see what people think of this idea first.
>
>I suppose this suggestion would classify as "message passing" - but each
>processor would only have to pass messages to a small number of other
>processors, so it would still be efficient.
>
>-g

In alpha/beta there is too much communication. The search tree isn't just a
tree in its message-flow topology: scores from one part of the tree can
influence other parts of the tree, _IFF_ they are passed around correctly
(a sketch of what that can look like is appended below).

>
>>I doubt that 'clustering' like that is going to work. And shared memory
>>for thousands of processors would mean that 99.9999999999% of the total cost
>>of the hardware would be in the interconnect. That machine would cost
>>billions of dollars.
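To illustrate the point above about scores being "passed around correctly",
here is a second minimal sketch, again in C with POSIX threads and again with
hypothetical names (shared_bounds, report_score): sibling searches at one
split point share their results, a better score tightens the window every
helper uses, and a fail-high aborts the whole node.

#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;   /* assumed initialized with pthread_mutex_init() */
    int alpha, beta;        /* the shared search window at this split point  */
    int best_move;          /* best move found so far at this node           */
    volatile int abort;     /* helpers poll this and stop searching when set */
} shared_bounds;

/* Called by whichever processor just finished searching one sibling move.   */
void report_score(shared_bounds *sb, int move, int score)
{
    pthread_mutex_lock(&sb->lock);
    if (!sb->abort && score > sb->alpha) {
        sb->alpha = score;        /* later siblings search a tighter window  */
        sb->best_move = move;
        if (score >= sb->beta)
            sb->abort = 1;        /* fail high: the rest of this node is now
                                     useless work, so tell everyone to stop  */
    }
    pthread_mutex_unlock(&sb->lock);
}

On a shared-memory machine that update costs one lock, and every helper sees
it on its next poll. On a message-passing machine the same update has to be
broadcast to every processor working below that node, and until the message
arrives those processors keep searching with stale bounds - one concrete form
of the "too much communication" problem.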