Computer Chess Club Archives



Subject: Re: Chess pc program on super computer

Author: Vincent Diepeveen

Date: 05:16:24 08/04/05



On August 04, 2005 at 02:50:32, Mimic wrote:

>On August 04, 2005 at 02:37:20, Mark jones wrote:
>
>>Can you imagine how Junior, Shredder, or Fritz would have played if they were
>>deployed on a supercomputer like this:
>>http://www.top500.org/sublist/System.php?id=7605
>>
>>If this were possible, not only would it kill all the humans, I think it would
>>have crushed Hydra too...
>>What do you think about it? And was there an attempt to deploy a pc program
>>on a supercomputer?
>
>
>How many Rpeak (GFlops) or Rmax (GFlops) does a normal personal computer do?

An Opteron delivers 2 flops per cycle per core.

Without using a calculator: a 2.2 GHz dual-core Opteron delivers
2.2 * 2 * 2 = 8.8 gflop, so a quad Opteron (four of those chips) delivers
4 * 8.8 = 35.2 gflop.
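
For the curious, here is that peak arithmetic as a tiny C program (a minimal
sketch; clock * flops-per-cycle * cores * chips is the usual way such peak
numbers get computed, and the figures are the Opteron ones from above):

#include <stdio.h>

int main(void)
{
    double ghz = 2.2;            /* clock speed in GHz */
    double flops_per_cycle = 2;  /* Opteron: 2 flops per cycle per core */
    double cores = 2;            /* dual core */
    double chips = 4;            /* quad (4-socket) machine */

    double per_chip = ghz * flops_per_cycle * cores;  /* 8.8 gflop */
    double per_box  = per_chip * chips;               /* 35.2 gflop */

    printf("per chip: %.1f gflop, quad box: %.1f gflop\n",
           per_chip, per_box);
    return 0;
}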

However, the comparison is not fair. IBM always quotes single precision
numbers, whereas the majority of researchers use double precision floating
point.

If you want the exact definition of a double precision floating point
number, look at the ANSI C definitions.

In practice researchers assume a 64-bit double times a 64-bit double
delivering a 64-bit double.

Single precision, in contrast, is at most 32 bits times 32 bits delivering
less than 32 bits worth of information.
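
A minimal ANSI C sketch of the difference (float.h carries those
definitions; FLT_DIG and DBL_DIG are the guaranteed significant decimal
digits):

#include <stdio.h>
#include <float.h>

int main(void)
{
    float  f = 1.0f / 3.0f;   /* 32-bit single precision */
    double d = 1.0  / 3.0;    /* 64-bit double precision */

    printf("float  keeps %d digits: %.17f\n", FLT_DIG, f);
    printf("double keeps %d digits: %.17f\n", DBL_DIG, d);
    return 0;
}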

Major cheating happens in those areas, of course. For example, on high-end
processors like the Itanium 2, Intel left out a hardware divide instruction,
so divisions in certain test programs can be done faster by an approximation
algorithm that delivers fewer correct decimals.
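
The trick looks roughly like this (a sketch of the idea, not Intel's actual
code: division becomes multiplication by an approximate reciprocal, refined
with Newton-Raphson steps; stop early and you get fewer correct decimals,
but faster):

#include <stdio.h>
#include <math.h>

/* Divide a/b (b > 0) via an approximate reciprocal of b, refined with
   Newton-Raphson steps x <- x * (2 - b*x). Each step roughly doubles
   the number of correct digits. The crude seed below stands in for the
   hardware reciprocal-estimate instruction. */
double approx_div(double a, double b, int steps)
{
    int e, i;
    double m = frexp(b, &e);              /* b = m * 2^e, 0.5 <= m < 1 */
    double x = ldexp(3.0 - 2.0 * m, -e);  /* linear guess for 1/b */

    for (i = 0; i < steps; i++)
        x = x * (2.0 - b * x);            /* Newton-Raphson refinement */
    return a * x;
}

int main(void)
{
    printf("1 step : %.17f\n", approx_div(2.0, 3.0, 1));
    printf("4 steps: %.17f\n", approx_div(2.0, 3.0, 4));
    printf("exact  : %.17f\n", 2.0 / 3.0);
    return 0;
}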

So all those gflops mentioned are basically multiply-add combinations.
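
The counting convention is easy to see: one multiply-add c += a*b is booked
as 2 floating point operations, so a loop doing n of them "performs" 2*n
flops whether or not the results are useful. For example:

#include <stdio.h>

int main(void)
{
    long n = 1000000L, i;
    double a = 1.0000001, b = 0.9999999, c = 0.0;

    for (i = 0; i < n; i++)
        c += a * b;              /* one multiply + one add = 2 flops */

    printf("c = %f, flops booked: %ld\n", c, 2 * n);
    return 0;
}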

The CELL processor is supposed to deliver 256 gflop single precision; that
is, however, less than 30 gflop double precision.

In reality software isn't optimal, so it will be even less than those 30
gflop.

Still, it is impressive for a processor that is supposed to get cheap.

The expensive 1.5 GHz Itanium 2 delivers, for example, 7 gflop on paper.
That's also just paper. At the 1 July 2003 presentation of the 416-processor
1.3 GHz Itanium 2 machine, SGI made public that it is effectively 2 times
faster in gflops for most applications than the previous 500 MHz MIPS
R14000.

On paper the MIPS delivers 1 gflop at 500 MHz and the 1.3 GHz Itanium 2
delivers 5.2 gflop, so on paper a 5.2x jump.

In practice: 2 times faster, according to SGI.

NASA initially had a similar report for their own software when running on a
512-processor partition.

So you have to take all those gflops with some reservation. The reality is
that those supercomputers usually idle 70% of the time in the first year,
50% in the second and third year, and, when they are outdated in the fourth
year, still 30%. That is with all reserved time counted as used and all
'system processors' not taken into account. In reality they idle even more.

So many of those supercomputers are paper heroes which the researchers
literally use to "run their application faster than it would run on a pc".

There are very few applications that are optimized to the utmost. Certain
matrix calculation libraries are pretty good and come close to optimal.

For those researchers those gflops *really* matter.

You can count them on one hand.
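
To give an idea of how such a matrix library gets measured (a minimal
sketch, assuming a CBLAS implementation is installed, linked with e.g.
-lblas, error handling omitted: an n x n matrix multiply does 2*n^3 flops,
so you just divide by the wall time):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <cblas.h>

int main(void)
{
    int n = 512, i;
    double *A = malloc(sizeof(double) * n * n);
    double *B = malloc(sizeof(double) * n * n);
    double *C = malloc(sizeof(double) * n * n);
    double flops = 2.0 * n * n * n;   /* multiply-adds, 2 flops each */
    clock_t t0;
    double secs;

    for (i = 0; i < n * n; i++) { A[i] = 1.0; B[i] = 2.0; C[i] = 0.0; }

    t0 = clock();
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                n, n, n, 1.0, A, n, B, n, 0.0, C, n);
    secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("%.0f flops in %.3f s = %.2f gflop/s\n",
           flops, secs, flops / secs / 1e9);
    free(A); free(B); free(C);
    return 0;
}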

What matters is that they have the POSSIBILITY to run their application
really fast if they want to, and that is very important.

This big 12288-processor IBM 'Blue Gene' supercomputer (6 racks of 2048
processors each) has a cost price of just 6 million euro.

That's very little if you consider the huge calculation power it delivers
for those researchers it matters to.

The best usages of those supercomputers are simulating nuclear explosions
(I did not say the university of Groningen is running nuclear simulations)
and calculating, for example, where the electrons in materials are.

Usually the homepage advertises 'biological' supercomputing. In reality only
very little system time goes to medicine and biological research, about 0.5%
of system time, according to a European supercomputer report (covering all
scientific supercomputers in the whole of Europe).

An amazing amount of system time goes to all kinds of weather and extreme
climate simulations. Worldwide they have already calculated so much on the
question of what the height of the seawater will become. I got really sick
of that, as I could not test Diep until the world championship itself,
because some weather simulation was running nonstop.

After they had run on 350+ processors for months (450,000 cpu hours or so,
according to the official project papers), and after they had created a new
discovery series from the output claiming the sea water would rise 1 meter
over the coming 100 years, they discovered a small bug in the initialization
data.

They had initialized the sea water 1 meter too high when starting the run
half a year earlier.

This was the reason Diep ran buggy for the first 7 rounds of the 2003 world
championship.

Vincent





