Computer Chess Club Archives



Subject: Re: Is improvement from hash tables in middle game linear or exponential

Author: Vincent Diepeveen

Date: 06:19:16 12/21/03

On December 20, 2003 at 16:19:43, Robert Hyatt wrote:

>On December 20, 2003 at 10:13:18, Vincent Diepeveen wrote:
>
>>On December 20, 2003 at 08:43:37, Thomas Mayer wrote:
>>
>>>Hi Vincent,
>>>
>>>>I did 2 experiments:
>>>>
>>>>experiment A) I ran Diep on 460 processors with a 115MB hashtable *in total*
>>>>experiment B) The same Diep version on 460 processors with 115GB of hashtables.
>>>>
>>>>Note: hashtable means transposition table here. Each processor had a local
>>>>4.2MB pawn hashtable and a local 32MB evaluation table.
>>>>
>>>>MB = 10^6 , GB = 10^9
>>>>#probes   = 4
>>>>entrysize = 16 bytes
>>>>position  = r4rk1/p1q1nppp/b2b4/2nP4/1P3p2/P1N2N2/B1P3PP/R1BQK2R w KQ -
>>>>
>>>>What is the expected outcome?
>>>
>>>well, there are several unclear facts - e.g. how the usage of 460 processors
>>>differs from the usage of 1 processor, etc.
>>>
>>>Anyway, let's try a guess and take Christophe Théron's idea that a hashtable
>>>doubling is worth about 7 Elo... We have 10 doublings, so 70 Elo expected... A
>>>doubling in speed is worth around 60 Elo... So I expect a speedup of about
>>>120-150%... How far off am I?! :)
>>>
>>>Greets, Thomas
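
[For reference, Thomas's arithmetic taken at face value:

10 doublings * 7 Elo = 70 Elo
70 Elo / 60 Elo per speed doubling = ~1.17 doublings => 2^1.17 = ~2.2x

which is roughly the 120-150% he guesses.]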
>>
>>I don't want any Elo answer; that's bullshit of course. Above 12 ply (without
>>forward pruning and with some extensions and checks in qsearch) another ply
>>matters very little. The question asked here is: "what does it matter for
>>search depth?"
>
>
>I ran this test a few years ago. The answer for my program was this:

The test ran for 10 hours, so it searches something like half a trillion nodes,
which all get stored in the hashtables.

So any hashtable size is going to be very small compared to that... even
115GB of RAM.
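
To put rough numbers on that, using the half-trillion-node figure and the
16-byte entries given above:

115e9 bytes / 16 bytes per entry = ~7.2 billion entries
5e11 nodes / 7.2e9 entries       = loading factor of ~70

115e6 bytes / 16 bytes per entry = ~7.2 million entries
5e11 nodes / 7.2e6 entries       = loading factor of ~70000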

On the supercomputer I store the qsearch in each CPU's own hashtable, so that
part of the global hashtable is only used locally. Each processor allocates

115G / 460 = ~250MB of local hashtable for each CPU.

Then the same thing for 115M / 460 = ~250KB per CPU.

So in both cases the loading factor is *really* big.
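
For the curious, here is a minimal C sketch of what such a local table could
look like, using the 4 probes and 16-byte entries from the experiment above.
The field layout, names, and depth-preferred replacement are illustrative
assumptions, not Diep's actual code:

#include <stdint.h>
#include <stdlib.h>

/* 16 bytes per entry, as in the experiment above. */
typedef struct {
    uint64_t key;    /* Zobrist hash of the position              */
    uint32_t move;   /* best move found                           */
    int16_t  score;  /* search score                              */
    uint8_t  depth;  /* draft the score is good for               */
    uint8_t  flags;  /* bound type: exact / lower / upper         */
} tt_entry;

static tt_entry *tt;       /* this CPU's local slice               */
static uint64_t  tt_size;  /* number of entries (assumed >= 4)     */

/* Each CPU allocates its own slice, e.g. 250MB => ~15.6M entries. */
int tt_init(uint64_t bytes) {
    tt_size = bytes / sizeof(tt_entry);
    tt = calloc(tt_size, sizeof(tt_entry));
    return tt != NULL;
}

/* Probe 4 consecutive slots starting at key % (size - 3). */
tt_entry *tt_probe(uint64_t key) {
    tt_entry *e = &tt[key % (tt_size - 3)];
    for (int i = 0; i < 4; i++, e++)
        if (e->key == key)
            return e;
    return NULL;
}

/* Store, replacing the shallowest of the 4 candidate slots. */
void tt_store(uint64_t key, uint32_t move, int16_t score,
              uint8_t depth, uint8_t flags) {
    tt_entry *e = &tt[key % (tt_size - 3)];
    tt_entry *victim = e;
    for (int i = 1; i < 4; i++)
        if (e[i].depth < victim->depth)
            victim = &e[i];
    victim->key = key;     victim->move = move;   victim->score = score;
    victim->depth = depth; victim->flags = flags;
}

With 4 probes per cluster and depth-preferred replacement, even a heavily
overloaded table tends to keep the deep entries that matter most.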

>Going from a very small (but not impossibly small) hash table to one that
>is way too big and can store the entire search and then some made a difference
>of a factor of 2.0 to 2.5, depending on the middlegame position. I.e., back then
>I went from something like 48K to some big upper bound, and the raw search
>time to a specific depth was at best 2-2.5X faster. In the middlegame. Which
>is maybe a ply. In the endgame it is huge.
>
>Of course, I'll save you the trouble, and say that this is with my crappy
>program, using my crappy hashing algorithm, with my crappy search, with
>my crappy quiescence search that doesn't hash at all, and with my crappy
>evaluation.  So a less-crappy program might do better.  Of course, in your
>case, it would be _better_ if you tested a program without any _bugs_.
>Results produced by a program with significant parallel search bugs are not
>very reliable or interesting.


