Author: Christophe Theron
Date: 21:15:44 05/01/00
On May 01, 2000 at 23:36:30, Robert Hyatt wrote:
>On May 01, 2000 at 22:50:50, Christophe Theron wrote:
>
>>On May 01, 2000 at 21:59:05, Robert Hyatt wrote:
>>
>>>On May 01, 2000 at 20:39:13, Christophe Theron wrote:
>>>
>>>>On May 01, 2000 at 18:50:24, Robert Hyatt wrote:
>>>>
>>>>>On April 30, 2000 at 22:34:17, Christophe Theron wrote:
>>>>>
>>>>>>On April 30, 2000 at 18:42:57, Amir Ban wrote:
>>>>>>
>>>>>>>On April 30, 2000 at 16:39:04, Christophe Theron wrote:
>>>>>>>
>>>>>>>>On April 30, 2000 at 06:47:29, Amir Ban wrote:
>>>>>>>>
>>>>>>>>>On April 29, 2000 at 11:31:09, Eran wrote:
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>I am sorry if I said it. Okay, I believe you that Junior6 has underpromotion
>>>>>>>>>>code, and that's wonderful. Maybe I will consider buying it. Does Junior6
>>>>>>>>>>consume as much hash table memory as Fritz does? Is a large hash table
>>>>>>>>>>important for Junior6? Is a 40 MB hash table enough for 40/120 games?
>>>>>>>>>>
>>>>>>>>>>Eran
>>>>>>>>>
>>>>>>>>>More memory for hash is good, but Junior is not very sensitive to it; you can
>>>>>>>>>change the memory size by an order of magnitude without an obvious effect on
>>>>>>>>>playing strength.
>>>>>>>>>
>>>>>>>>>The comparison to Fritz is interesting but backward: I believe that Fritz 6
>>>>>>>>>(the new one) consumes less memory than previous versions, and the reason may
>>>>>>>>>be a conversation I had with Frans about this in Paderborn, after which he may
>>>>>>>>>have decided that he doesn't need so much memory.
>>>>>>>>>
>>>>>>>>>Amir
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>I must admit I don't understand what you're saying...
>>>>>>>>
>>>>>>>>
>>>>>>>> Christophe
>>>>>>>
>>>>>>>
>>>>>>>Then why don't you ask :)
>>>>>>>
>>>>>>>I understood from Frans that he's hashing quiescence nodes. I told him I don't,
>>>>>>>and that I consider it a waste of time.
>>>>>>>
>>>>>>>Amir
>>>>>>
>>>>>>
>>>>>>
>>>>>>I hash quiescence nodes. I tried both methods (hashing them and not hashing
>>>>>>them) and found that hashing QSearch nodes was definitely better, but not by
>>>>>>much. I did hours of experiments and drew a lot of curves with my spreadsheet
>>>>>>in order to find this.
>>>>>>
>>>>>>It works better even if the hash table is highly saturated.
>>>>>>
>>>>>>That's how it works for me. I don't think I have a better hashing/replacement
>>>>>>strategy; actually, it is rather simplistic. Maybe it's because I do more in
>>>>>>QSearch than both of you do, although I cannot know for sure.
>>>>>>
>>>>>>Maybe I should check again... :)
>>>>>>
>>>>>>
>>>>>>
>>>>>> Christophe
>>>>>
>>>>>
>>>>>I used to hash q-search nodes. I found it reduced the size of the tree by about
>>>>>10%.
>>>>
>>>>
>>>>
>>>>Yes, that's approximately what I measured.
>>>>
>>>>
>>>>
>>>>
>>>>> I also found that by taking it out, I reduced the total search time by
>>>>>15%. A net gain of 5%.
>>>>
>>>>
>>>>
>>>>That's where I differ. Taking hash probing/updating out of QSearch results in
>>>>a very marginal speed gain (in NPS) for me.
>>>>
>>>>
>>>>
>>>>
>>>>> More importantly, I don't need nearly as much hash memory
>>>>>now, since over half of the search is not hashed.
>>>>
>>>>
>>>>
>>>>I think there is no saving in memory. In QSearch, if the hash table slot is
>>>>available, I use it and may gain something; if it's not available (because
>>>>more important information is stored there), I end up not using the hash info
>>>>in QSearch, just as you do.
>>>>
>>>>Not hashing in QSearch means you are not taking advantage of all the memory
>>>>you have to help you decrease the size of your tree. It does not mean that you
>>>>will need less memory.
>>>>
>>>>You do not take advantage of a resource you have, as your own result (a 10%
>>>>smaller tree when using the HT in QSearch) shows.
>>>>
>>>>Of course, the increased NPS in your case justifies your choice.
>>>>
>>>>
>>>>
>>>>
>>>>>My next task is to save some time by getting rid of the hashing/unhashing code
>>>>>in the q-search as well, since it isn't used...
>>>>
>>>>
>>>>Yes, I thought of this possible speed gain. But then you have to write a
>>>>special version of make/unmake that you use exclusively in QSearch, and design
>>>>another system to check for repetitions, for example (as I assume that you use
>>>>the hash key to detect repetitions).
>>>>
>>>
>>>
>>>Not possible in my q-search. I don't do checks. Only captures. As a result,
>>>I don't do repetition checks at all since they can't possibly happen.
>>
>>
>>
>>I understand. So obviously you'll gain some extra speed by taking out the
>>hashing code.
>>
>>
>>
>>
>>>>This may be worth the effort for you, because you generate only captures in
>>>>QSearch (which means no repetition can happen there). But my QSearch does more
>>>>than that, so for me it's a real problem. My QSearch catches a lot of draws by
>>>>repetition.
>>>
>>>I did this in CB. I am not sure it is the right thing to do, as several good
>>>programs are using a restricted (or non-existent) capture search. Ferret is
>>>one example, Junior is another. The benefit right now is that I use all the
>>>extra time to extend in the basic search, where I generally encounter fewer
>>>errors than in the q-search.
>>
>>
>>
>>The benefit of the kind of QSearch I do is not big. I do it mainly because I
>>find it more elegant.
>>
>>If it were worse than doing a simplistic QSearch, I would not do it. But as it
>>is not worse, I tend to use the most "elegant" approach (which is a matter of
>>taste in this case).
>>
>>
>>
>>
>>>>You also need to keep the hashing of pawn structures anyway, so not all
>>>>hashing will be taken out of the QSearch make/unmake.
>>>
>>>
>>>right... although that is (in my code) a 32-bit hash, rather than a 64-bit
>>>one.
>>>
>>>>
>>>>
>>>>But I understand that not hashing in QSearch can be a good decision for some
>>>>programs.
>>>>
>>>>
>>>>
>>>> Christophe
>>>
>>>
>>>Bruce and I flipped and flopped on this one. I started off hashing, while I
>>>don't think he did. Then I took it out but he added it. It is very close
>>>to 'equal' in my code. And if it is equal, I prefer not doing it as it doesn't
>>>stress memory nearly as much, and lets me do long searches without needing huge
>>>hash tables...
>>
>>
>>
>>I understand the "stress memory" point.
>>
>>But I don't buy the "need smaller hash tables" argument. This is just not true.
>>
>>The hash table slots you fill during QSearch will never compete with the slots
>>you fill during the "normal" search (I do not know what you call it).
>>
>>If you use the standard replacement scheme, which uses the depth of the
>>computed subtree as the priority factor for a given slot, then it's obvious
>>that a node computed in QSearch will never replace a node computed during the
>>normal search, nor prevent a node computed in the normal search from replacing
>>it.
>>
>>By not sending QSearch nodes into the hash table, you are NOT allowing more
>>space for the other nodes. You are just wasting memory space. I understand that
>>the time saved in the process can justify it, but it's wrong to think that you
>>are saving a single byte of memory.
>>
>>
>>
>> Christophe
>
>
>
>This is a well-known bug. If you use totally depth-preferred replacement, you
>run into a big problem, in that the table (in a deep search) gets filled with
>'deep positions' and the positions near the tips don't get stored.
I guess that now you know which replacement strategy I'm using! :)
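
To make sure we are talking about the same scheme, here is a minimal sketch of
a pure depth-preferred store, the kind I described above. The names and the
entry layout are illustrative only, not my actual code:

    /* Minimal sketch of a pure depth-preferred store (illustrative). */
    typedef struct {
        unsigned long long key;   /* 64-bit Zobrist signature          */
        int depth;                /* draft of the stored subtree       */
        int score;
        int flags;                /* exact / lower bound / upper bound */
    } HashEntry;

    #define TABLE_SIZE (1 << 20)  /* number of slots, a power of two */
    static HashEntry table[TABLE_SIZE];

    void store(unsigned long long key, int depth, int score, int flags)
    {
        HashEntry *slot = &table[key & (TABLE_SIZE - 1)];
        /* Keep whichever entry searched the deeper subtree. A QSearch
           entry (depth 0) can only claim an empty or equally shallow
           slot, so it never evicts a normal-search entry. */
        if (depth >= slot->depth) {
            slot->key   = key;
            slot->depth = depth;
            slot->score = score;
            slot->flags = flags;
        }
    }

With this rule a deep search does end up with the table monopolized by deep
entries, which is exactly the problem you describe.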
> Yet they
>are _critical_. My approach is the one used by Ken Thompson, that of using
>two tables, one depth-preferred, one always-store.
OK, so in this case you are right, storing QSearch nodes does reduce the space
for the other nodes, in the "always replace" table at least.
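
For readers following the thread, the two-table scheme might look roughly like
this (hypothetical names again, reusing HashEntry and TABLE_SIZE from the
sketch above; this is not Crafty's actual code):

    #include <stddef.h>   /* for NULL */

    /* Rough sketch of the Thompson two-table scheme: one depth-preferred
       table plus one always-store table. Hypothetical, not Crafty. */
    static HashEntry depth_table[TABLE_SIZE];
    static HashEntry always_table[TABLE_SIZE];

    void store2(unsigned long long key, int depth, int score, int flags)
    {
        /* Depth-preferred table: the deeper subtree wins. */
        HashEntry *d = &depth_table[key & (TABLE_SIZE - 1)];
        if (depth >= d->depth) {
            d->key = key; d->depth = depth; d->score = score; d->flags = flags;
        }
        /* Always-store table: the newest entry wins unconditionally, so
           tip positions (including QSearch nodes, if you store them)
           really do compete for these slots. */
        HashEntry *a = &always_table[key & (TABLE_SIZE - 1)];
        a->key = key; a->depth = depth; a->score = score; a->flags = flags;
    }

    HashEntry *probe2(unsigned long long key)
    {
        /* Prefer the depth-preferred entry when both match. */
        HashEntry *d = &depth_table[key & (TABLE_SIZE - 1)];
        if (d->key == key) return d;
        HashEntry *a = &always_table[key & (TABLE_SIZE - 1)];
        return (a->key == key) ? a : NULL;
    }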
>Probably the best way to compare two programs is to do the following: pick a
>position and search (using the same hardware) for 300 seconds, while varying
>the hash table size for both from something very small to something very big.
>For each program, you will find a point where the improvement slope drops
>sharply and levels off. We need to test a q-search prober vs a non-q-search
>prober.
>
>I can run the test for Crafty if you want... and can run from something tiny
>up to 384M max...
I can do the same, but I'll not be able to use more than 16MB of RAM, as my
computer has only 32MB.
I have already done this kind of experiment and plotted the curves with a
spreadsheet. Actually, I did the experiment with two versions of my own
program: one using the HT during QSearch, the other not using the HT during
QSearch. The programs are otherwise totally identical.
The experiment was not done on only one position, but on 40 positions. The
positions were taken from two actual games (20 positions from the first game,
20 from the second) played by an old version of Chess Tiger against Rebel
Decade, one with White and the other with Black. The positions are consecutive
and range from early middlegame to early endgame.
All positions were searched to a fixed depth of 8 plies. The size of the hash
table ranged from 32KB to 8MB. I have checked that the point where the
efficiency slope drops off lies within this range of hash table sizes.
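
In case the setup is unclear, the measurement harness is essentially the
following loop, run once for each of the two versions of the program. The
engine calls and the test_fen array below are placeholders, not my real
interface:

    #include <stdio.h>

    /* Placeholder engine API -- not any real program's interface. */
    extern void set_hash_size(size_t bytes);
    extern void set_position(const char *fen);
    extern void clear_hash(void);
    extern unsigned long long search_fixed_depth(int plies);
    extern const char *test_fen[40];   /* the 40 test positions */

    int main(void)
    {
        /* Hash sizes from 32KB to 8MB, doubling at each step. */
        for (size_t bytes = 32 * 1024; bytes <= 8 * 1024 * 1024; bytes *= 2) {
            set_hash_size(bytes);
            unsigned long long total_nodes = 0;
            for (int i = 0; i < 40; i++) {
                set_position(test_fen[i]);
                clear_hash();                         /* one possible protocol */
                total_nodes += search_fixed_depth(8); /* fixed 8-ply search */
            }
            printf("%9zu bytes: %llu nodes\n", bytes, total_nodes);
        }
        return 0;
    }

Plotting total_nodes against the hash size gives the efficiency curve: the
smaller the tree for a given size, the better the program uses its memory.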
The curves are interesting and show that, for my program, using the HT during
QSearch is always better, and surprisingly it helps even more when... the HT
begins to be saturated!!
If you are interested, I can send you the small Excel4 file (or export it to a
text file).
I would be interested in the curve that could be drawn from the experiment you
are describing.
Christophe