Computer Chess Club Archives



Subject: Re: New intel 64 bit ?

Author: Vincent Diepeveen

Date: 05:55:00 07/08/03



On July 07, 2003 at 23:35:45, Robert Hyatt wrote:

>On July 07, 2003 at 14:37:25, Jay Urbanski wrote:
>
>>On July 07, 2003 at 10:48:02, Robert Hyatt wrote:
>>
>>>On July 05, 2003 at 23:37:47, Jay Urbanski wrote:
>>>
>>>>On July 04, 2003 at 23:33:46, Robert Hyatt wrote:
>>>>
>>>><snip>
>>>>>"way better than MPI".  Both use TCP/IP, just like PVM.  Except that MPI/OpenMP
>>>>>is designed for homogeneous clusters while PVM works with heterogeneous mixes.
>>>>>But for any of the above, the latency is caused by TCP/IP, _not_ the particular
>>>>>library being used.
>>>>
>>>>With latency a concern I don't know why you'd use TCP/IP as the transport for
>>>>MPI when there are much faster ones available.
>>>>
>>>>Even VIA over Ethernet would be an improvement.
>>>
>>>I use VIA over ethernet, and VIA over a cLAN giganet switch as well.  The
>>>cLAN hardware produces .5 usec latency, which is about 1000X better than any
>>>TCP/IP-ethernet implementation.  Ethernet will never touch good
>>>hardware like the cLAN stuff.
>>>
>>>MPI/PVM use ethernet - tcp/ip for one obvious reason: "portability" and
>>>"availability".  :)
>>
>>Well, there are plenty of MPI/PVM implementations that don't use TCP/IP.  MPICH-GM,
>>for instance, and PVM-GM over Myrinet.  If you're planning to use a cluster for
>>chess, I would imagine you'd use the fastest switch available and bypass TCP/IP
>>for performance reasons.
>
>I have PVM running on our giganet switch, which is faster than myrinet.  But,
>as I said, such clusters are _rare_.  TCP/IP is the common cluster connection,
>for obvious reasons.  And that's where the interest in clusters lies, not
>in how exotic a combination you can put together, but in what kind of
>performance you can extract from a common combination.

Those 25 ns and 50 ns hops quickly get *very* expensive. The biggest latency,
however, is caused by the network cards themselves, *whatever* protocol they use.

Even the fastest non-ethernet implementations still have trouble avoiding that
problem. The approach SGI takes instead is to put a hub between the memory and
the processors. That is faster than *any* network card will ever be, simply
because of the latency the PCI bus adds.

The PCI bus can handle around a quarter of a million messages a second, so any
network card is going to suck compared to the fast routers.
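To put that figure in perspective, here is a quick back-of-the-envelope sketch (my own arithmetic, not from the post) turning the quoted message rate into a per-message latency and comparing it with the 0.5 usec cLAN figure mentioned earlier in the thread; the assumption that one message costs one full PCI transaction is mine, for illustration:

```python
# Per-message overhead implied by the figures quoted in this thread.
# Assumption (mine, for illustration): one message = one PCI transaction.

PCI_MESSAGES_PER_SEC = 250_000   # "a quarter of a million messages a second"
CLAN_LATENCY_US = 0.5            # cLAN giganet latency quoted above

# Microseconds of PCI-bus overhead per message.
pci_latency_us = 1_000_000 / PCI_MESSAGES_PER_SEC

print(f"PCI bus:  ~{pci_latency_us:.1f} us per message")   # ~4.0 us
print(f"cLAN:     ~{CLAN_LATENCY_US} us")
print(f"Ratio:    ~{pci_latency_us / CLAN_LATENCY_US:.0f}x")
```

On these numbers the PCI bus alone costs roughly 4 us per message, an order of magnitude above the switch fabric itself, which is the point being made about where the latency really lives.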

So it is no surprise that SGI's hub has so much lower latency. Note that the
newer version, which delivers twice the bandwidth, is called the SHUB; it is
used, for example, in the Itanium 2 Madison systems.

Best regards,
Vincent




Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.