Computer Chess Club Archives



Subject: Re: parallel scaling

Author: Vincent Diepeveen

Date: 05:14:24 10/30/03



On October 30, 2003 at 00:44:54, Dave Gomboc wrote:

>On October 29, 2003 at 17:37:08, Robert Hyatt wrote:
>
>>On October 29, 2003 at 14:20:01, Vincent Diepeveen wrote:
>>
>>>On October 28, 2003 at 23:21:55, Robert Hyatt wrote:
>>>
>>>>On October 28, 2003 at 18:12:16, Vincent Diepeveen wrote:
>>>>
>>>>>On October 28, 2003 at 09:48:52, Robert Hyatt wrote:
>>>>>
>>>>>>On October 27, 2003 at 21:23:13, Vincent Diepeveen wrote:
>>>>>>
>>>>>>>On October 27, 2003 at 20:09:55, Eugene Nalimov wrote:
>>>>>>>
>>>>>>>>On October 27, 2003 at 20:00:54, Robert Hyatt wrote:
>>>>>>>>
>>>>>>>>>On October 27, 2003 at 19:57:12, Eugene Nalimov wrote:
>>>>>>>>>
>>>>>>>>>>On October 27, 2003 at 19:24:10, Peter Skinner wrote:
>>>>>>>>>>
>>>>>>>>>>>On October 27, 2003 at 19:06:51, Eugene Nalimov wrote:
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>I don't think you should be afraid. 500 CPUs is not enough -- you need a
>>>>>>>>>>>>reasonably good program to run on them.
>>>>>>>>>>>>
>>>>>>>>>>>>Thanks,
>>>>>>>>>>>>Eugene
>>>>>>>>>>>
>>>>>>>>>>>I would bet on Crafty with 500 processors. That is for sure. I know it is quite
>>>>>>>>>>>a capable program :)
>>>>>>>>>>>
>>>>>>>>>>>Peter.
>>>>>>>>>>
>>>>>>>>>>Efficiently utilizing 500 CPUs is a *very* non-trivial task. I believe Bob
>>>>>>>>>>can do it, but it will be neither quick nor easy.
>>>>>>>>>>
>>>>>>>>>>Thanks,
>>>>>>>>>>Eugene
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>If the NUMA stuff doesn't swamp me.  And if your continual updates to the
>>>>>>>>>endgame tables don't swamp me.  We _might_ see some progress here.  :)
>>>>>>>>>
>>>>>>>>>If I can just figure out how to malloc() the hash tables reasonably on your
>>>>>>>>>NUMA platform, without wrecking everything, that will be a step...
>>>>>>>>
>>>>>>>>Ok, just call the memory allocation function exactly where you are calling it
>>>>>>>>now, and then let the user issue the "mt" command before "hash" and "hashp" if
>>>>>>>>(s)he wants good scaling.
>>>>>>>>
>>>>>>>>Thanks,
>>>>>>>>Eugene
>>>>>>>
>>>>>>>That's why I'm multiprocessing. All problems solved at once :)
>>>>>>
>>>>>>
>>>>>>And several added.  Duplicate code.  Duplicate LRU egtb buffers.  Threads
>>>>>
>>>>>Duplicate code is good. Duplicate indexation EGTB tables are good too (note the
>>>>>DIEP ones do not require 200MB for 6 men, but only a few hundred KB).
>>>>>
>>>>
>>>>wanna compare access speeds for decompression on the fly?  If you make
>>>>the indices smaller, you take a big speed hit.  It is a trade-off.
>>>
>>>Not really. Compressed, I need around 500MB for all 5-men. Nalimov needs 7.5GB.
>>>
>>>What's more compact?
>>
>>Let's compare apples to apples.  You are storing DTM for all 3-4-5 piece
>>files in 500MB?  You just set a new world record for size.
>>
>>Aha.  You aren't storing DTM, you are storing W/L/draw?  Then the comparison
>>is not equal.
>>
>>Either way, my statement stands...
>
>It sounds to me like he's storing enough that he can do DTM with some lookahead
>search.
>
>Dave
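
Dave's reading above -- store only win/loss/draw and recover the mate by a short
lookahead -- can be sketched in a few lines of C. This is only an illustration of
the idea, not DIEP's (or anyone's) actual code: probe_wdl(), gen_moves() and the
make/unmake hooks are hypothetical engine functions, not a real EGTB API.

typedef struct position position_t;  /* engine's position type (opaque here) */
typedef int move_t;                   /* engine's move encoding */

enum wdl { LOSS = -1, DRAW = 0, WIN = 1 };

extern enum wdl probe_wdl(const position_t *pos);  /* result for side to move */
extern int  gen_moves(const position_t *pos, move_t *list);
extern void make_move(position_t *pos, move_t m);
extern void unmake_move(position_t *pos, move_t m);

/* Pick any move that keeps the win: one whose successor is a LOSS for the
   opponent.  The engine's normal search on top of this supplies the
   lookahead that converges on the mate. */
int pick_winning_move(position_t *pos, move_t *best)
{
    move_t list[256];
    int n = gen_moves(pos, list);
    for (int i = 0; i < n; i++) {
        make_move(pos, list[i]);
        enum wdl r = probe_wdl(pos);   /* from the opponent's point of view */
        unmake_move(pos, list[i]);
        if (r == LOSS) {
            *best = list[i];
            return 1;
        }
    }
    return 0;                          /* no winning move from here */
}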
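
As for Eugene's suggestion earlier in the thread (keep the malloc() where it is and
let the user issue "mt" before "hash"): the usual way that helps is first-touch page
placement -- allocate the table once, then have every search thread zero its own
slice so the pages end up on the NUMA node that thread runs on. Below is a minimal
sketch under that assumption (Linux-style first-touch policy, pthreads, threads
assumed pinned to their nodes); it is not Crafty's actual code and the names are
made up.

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

#define NTHREADS 4                        /* the "mt" setting */

typedef struct { unsigned long long key, data; } hash_entry_t;

static hash_entry_t *hash_table;
static size_t hash_entries = 1ULL << 24;  /* later set by the "hash" command */

static void *first_touch(void *arg)
{
    long id = (long)arg;
    size_t chunk = hash_entries / NTHREADS;
    /* Each thread zeroes its own slice, so those pages get mapped on the
       NUMA node this thread runs on under the default first-touch policy. */
    memset(hash_table + (size_t)id * chunk, 0, chunk * sizeof(hash_entry_t));
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    /* Plain malloc(), exactly where the engine calls it today; physical
       placement is decided only when the pages are first written. */
    hash_table = malloc(hash_entries * sizeof(hash_entry_t));
    if (hash_table == NULL)
        return 1;
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, first_touch, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    free(hash_table);
    return 0;
}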

Let me add a note to all this. In 1999, EGTBs would have helped DIEP a lot. At the
2002 world championship they only hurt performance. Engines' endgame play is good
enough now that they avoid getting into positions where EGTBs could potentially
nail them.

Of course it still happens, but the significance of EGTBs for the 2003 world
championship and onward will not be very big.

Nevertheless, I am sure Bob is going to deny this, pointing to some toy games at
5 0 on ICC.

Note that Bob is incorrect in his assumption that 7.5GB is better than 500MB, and
likewise that 200MB for the indices is better than a few hundred kilobytes.

For ten years or so I have wondered why Bob makes such assumptions. I see it as a
classic Hyatt debating technique.

"let your opponent proof everything from ground up".

Hehehehehe


