Author: Robert Hyatt
Date: 15:20:58 05/30/00
On May 30, 2000 at 12:57:36, Olaf Jenkner wrote:

>On May 30, 2000 at 00:41:00, Robert Hyatt wrote:
>
>>The headache here is that the definition of "supercomputer" has changed a lot
>>over the last 30 years. Today, most of the top-500 list are cluster machines,
>>with the IBM SPs right in there at the top. Yet none of those machines can
>>really hold a candle to a 10 year old C90 for typical huge matrix mathematical
>>modelling programs. It is way harder to get one of these clusters up to a
>>monster performance number than it was for an old Cray. So the term has
>>changed... As has the marketplace...
>>
>>Although Cray is still selling the things...
>>
>Does that mean there has been slow development in the field of such machines?
>They are so important in FEM modelling, weather forecasting ...
>Of course for computer chess I wouldn't use them.

It probably means that slowly but surely, people are learning how to do things
on 'clusters' that were impossible or impractical 10-15 years ago.

Ten years ago, everyone was amazed that on a 16-cpu C90, each cpu could do two
8-byte memory reads and one 8-byte memory write every clock cycle (clock
cycle = 2 nanoseconds). The T90 took this to new levels: 32 cpus, each of
which could read four 8-byte words and write two 8-byte words per cycle
(6 words x 8 bytes = 48 bytes per cpu). That is, the T90 could sustain
32 * 48 bytes per nanosecond, roughly 1.5 terabytes/second of memory
bandwidth. A PC does well to sustain .15 gigabytes/second of memory
bandwidth. The difference is huge.

BUT, if you can figure out how to distribute the algorithm over hundreds or
thousands of nodes, you can use hundreds or thousands of those .15
gigabyte/second pipes (ten thousand of them add up to the same 1.5
terabytes/second), and suddenly you are in the same ballpark. For
applications that 'fit'.
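For the curious: a rough way to see where your own machine falls on that
scale is a STREAM-style copy loop. The sketch below is an illustration, not
anything from the post above; the array size and repetition count are
arbitrary assumptions, chosen only to be much larger than the caches.

/* Minimal sustained-memory-bandwidth sketch (illustrative; sizes are
 * assumptions).  A plain copy loop moves 8 bytes in and 8 bytes out per
 * iteration, the same kind of traffic behind the figures quoted above. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N    (16 * 1024 * 1024)   /* 16M doubles = 128 MB per array */
#define REPS 10

int main(void) {
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    if (!a || !b) { fprintf(stderr, "out of memory\n"); return 1; }

    for (long i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; }

    clock_t start = clock();
    for (int r = 0; r < REPS; r++)
        for (long i = 0; i < N; i++)
            a[i] = b[i];          /* one 8-byte read + one 8-byte write */
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

    /* 16 bytes move per iteration (8 in, 8 out). */
    double gb = (double)REPS * N * 16.0 / 1e9;
    printf("check %.1f: copied %.1f GB in %.2f s = %.2f GB/s sustained\n",
           a[0], gb, secs, gb / secs);   /* printing a[0] keeps the loop live */

    free(a); free(b);
    return 0;
}

On a PC of that era a loop like this should land somewhere near the .15
gigabytes/second figure; the point of the post is that only by running
thousands of such pipes in parallel does a cluster approach one T90.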