Author: Robert Hyatt
Date: 07:29:52 06/04/98
Go up one level in this thread
On June 04, 1998 at 07:05:03, Guido Schimmels wrote:
>
>On June 03, 1998 at 11:47:59, Robert Hyatt wrote:
>
>>On June 03, 1998 at 07:20:16, Guido Schimmels wrote:
>>
>>>
>>>On June 02, 1998 at 13:03:47, Robert Hyatt wrote:
>>>
>>>>On June 02, 1998 at 10:27:07, Guido Schimmels wrote:
>>>>
>>>>>
>>>>>On May 30, 1998 at 09:56:25, Robert Hyatt wrote:
>>>>>
>>>>>>there are at least a couple of things that could make Fritz far more
>>>>>>sensitive to hash table size than other programs:
>>>>>>
>>>>>>(1) a poor replacement strategy. If this is true, then a larger table
>>>>>>reduces replacement, which would produce better performance.
>>>>>>
>>>>>>(2) using the table for other things besides the normal score/best move/
>>>>>>etc. If this is true, replacing *any* entry could be bad, depending on
>>>>>>what is stored in the table.
>>>>>>
>>>>>>no one knows what Fritz does, but one of the above reasons is almost
>>>>>>certain to be correct. I'd suspect (2) myself, since replacement
>>>>>>strategies are well-known now.
>>>>>
>>>>>Hi Bob !
>>>>>
>>>>>Thoughts on (1):
>>>>>I heard that when the hashtable is only 60% full, Fritz5 already
>>>>>starts to suffer seriously. Sounds like a poor replacement strategy
>>>>>to me.
>>>>>Remember Fritz5 runs at 200,000 nps on a Pentium MMX 200. A single
>>>>>instruction added to such a fast program causes a measurable slowdown.
>>>>>Sophisticated replacement would drop that number significantly
>>>>> - at least if he does q-search hashing, which I think he does -
>>>>>so Frans Morsch may think RAM is cheap and it's not worth it.
>>>>>
>>>>>Thoughts on (2):
>>>>>Maybe he stores some carefully chosen flags, who knows.
>>>>>But Fritz uses 64-bit entries (a 32-bit key / 32 bits of
>>>>>move|score|depth|flags, I guess), so there are not many bits
>>>>>left - if any.
>>>>>
>>>>>- Guido
>>>>
>>>>If he really stores a 32-bit hash signature, he is really *playing
>>>>with fire*. That is provably bad with some simple simulations or
>>>>tests. Several of us ran such tests a few years ago and posted the
>>>>results in r.g.c.c. (or r.g.c as it was then). 32 bits is not nearly
>>>>enough to prevent significant numbers of collisions.
>>>>
>>>>I suppose it is possible he does a "single probe" algorithm to drive
>>>>the speed up, but the danger is serious, particularly in endgames
>>>>where the depth can go wild. I.e., in a king and 3 pawns vs king and
>>>>3 pawns game this morning I was seeing 40+ ply searches at 20 seconds
>>>>per move. That gives plenty of paths to produce collisions that will
>>>>wreck things...
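A back-of-envelope birthday-paradox estimate supports the quoted claim. At Fritz-like speeds (say 200,000 nps for a minute, so roughly 12 million positions - an illustrative figure, not a measurement), a 32-bit signature yields tens of thousands of expected colliding pairs; the function below is mine, a sketch only:

```c
#include <stdint.h>

/* rough birthday-paradox estimate: expected number of distinct-position
 * pairs that share the same signature after visiting `positions` nodes */
static double expected_collision_pairs(double positions, int signature_bits)
{
    double keyspace = (double)((uint64_t)1 << signature_bits);
    return positions * (positions - 1.0) / 2.0 / keyspace;
}
```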
>>>
>>>When Fritz has 64 MB hash he'll use 54 bits of the 64-bit hashkey
>>>effectively:
>>>64 MB ---------------------------- 26 bits
>>>8 bytes/position ----------------- - 3
>>>two tables (black/white) --------- - 1
>>>---------------------------------- 32 + 22 = 54
>>>(8 bytes/position is definitely true for Fritz, the rest is my guess)
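The quoted arithmetic (26 bits for 64 MB, minus 3 for 8-byte entries, minus 1 for two tables, leaving 22 index bits that add to the 32 stored bits for 54) can be checked with a small helper; the function name and parameters are mine, purely illustrative:

```c
/* index bits actually used by a table of `table_bytes` bytes, given the
 * entry size and the number of separate tables (names are illustrative) */
static int hash_index_bits(unsigned long table_bytes, int entry_bytes, int tables)
{
    unsigned long entries = table_bytes / (unsigned long)entry_bytes
                                        / (unsigned long)tables;
    int bits = 0;
    while ((entries >>= 1) != 0)   /* floor(log2(entries)) */
        bits++;
    return bits;
}
```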
>>>
>>>Collisions might be one reason for Fritz to like big tables so much -
>>>but AFAIK Frans does it the same way on the dedicated hardware he also
>>>programs, where he has only small tables. So maybe he knows some tricks
>>>we don't :-{ (less than 32 bits for move|score|depth|flags ?)
>>>
>>>By the way, I read an article by Donninger (from 1995 I think)
>>>where he says some desperados use 16-bit signatures in
>>>their tables, but he didn't name them.
>>>
>>>-Guido
>>
>>
>>ok... several do that... when you said 32 bits for hash, I was thinking
>>of 32 bits for the complete signature, not storing 32 bits of a 64-bit
>>hash key. If you do a single-probe algorithm, then storing 32 and using
>>part of the remaining 32 is not bad, so long as the table is big enough
>>to use a reasonable part of the entire 64 bits.
>>
>>but for the non-hash, you need the following:
>>
>>move: can be stored in roughly 8 bits or so, if you use an offset move
>>generator and store an index into the move list rather than the move
>>itself. It is probably possible to reduce this slightly.
>>
>>score: a good question on how many bits. 16 would give you +/- 32767,
>>which is plenty for a program using centipawns.
>>
>>That leaves at least 8 bits: six to store the depth below this node and
>>two bits for the type of position. If he compresses the move to 6 bits,
>>or if he uses fewer than 16 bits for the score, that would leave a
>>couple of bits for other things like "extend this node based on previous
>>threats found" and so forth.
>>
>>But it's very tight. I use 96 bits myself, to avoid compressing the
>>move, plus I use fractional plies, which means that I need a bigger
>>"draft" field as well...
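To make the bit budget above concrete, here is one hypothetical way to pack such an entry into 64 bits; the layout and names are illustrative, not Fritz's (or anyone's) actual format:

```c
#include <stdint.h>

/* hypothetical 64-bit entry layout (illustrative only):
 *  bits  0..31  32-bit hash signature
 *  bits 32..39  move index (8 bits)
 *  bits 40..55  score, signed 16-bit centipawns
 *  bits 56..61  draft (6 bits)
 *  bits 62..63  bound type (exact / lower / upper) */
static uint64_t pack_entry(uint32_t sig, unsigned move, int16_t score,
                           unsigned draft, unsigned type)
{
    return  (uint64_t)sig
          | ((uint64_t)(move  & 0xFFu))  << 32
          | ((uint64_t)(uint16_t)score)  << 40
          | ((uint64_t)(draft & 0x3Fu))  << 56
          | ((uint64_t)(type  & 0x3u))   << 62;
}

static int16_t  entry_score(uint64_t e) { return (int16_t)(e >> 40); }
static unsigned entry_draft(uint64_t e) { return (unsigned)(e >> 56) & 0x3Fu; }
```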
>
>move:
>While storing moves as indexes allows the highest possible compression
>I can think of, it has two big disadvantages:
> 1. it is terribly slow
> 2. other representations help to detect collisions via failed move
>    validations, equivalent to a few bits of hash signature (how many?),
>    while an index provides zero information about the position, making
>    the compression worthless
>
>...but there is an easy way to store moves as 9 bits, which is what I'm
>doing:
>sliding pieces: to-square (6 bits) | direction (3 bits)
>king, pawns, knights: from-square | direction
>extraction+validation: switch(PieceOnSquare(square)) {...}
>
while that will work, the "index" approach and your approach will miss
many errors. I.e., for sliding pieces, "to" is not nearly enough to
really validate the move. And if you do as I do, and make the hash move
before *any* move generation is done, you invite trouble by making
illegal moves that will corrupt the board and the search itself... All
of the above will fail as often as they work. Even omitting just the
actual piece being moved causes many errors.
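For what it's worth, the quoted 9-bit scheme could be sketched like this; the helper names and details are my guesses, not Guido's actual code:

```c
/* sketch of the quoted 9-bit move encoding (details are my guess):
 * sliders:           to-square (6 bits)   | direction (3 bits)
 * king/pawn/knight:  from-square (6 bits) | direction (3 bits) */
static unsigned encode9(unsigned square, unsigned direction)
{
    return (square << 3) | (direction & 0x7u);   /* 6 + 3 = 9 bits */
}

static unsigned decode_square(unsigned m)    { return m >> 3; }
static unsigned decode_direction(unsigned m) { return m & 0x7u; }
```

Which square the 6 bits mean is then disambiguated by looking at the piece on that square, as in the quoted `switch(PieceOnSquare(square))`.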
>draft: (I always call it this way, because "depth" is a tongue twister
>for me as a German :-) )
>6 bits should be sufficient, even if you use fractional plies.
>You can have a resolution of 1/4 ply for iterations up to 15,
>then you reduce the resolution to 1/2 ply and "shift through the table"
>in order to adapt the entries. For iterations above 31 you finally
>reduce the resolution to a full ply.
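The variable-resolution idea above might be sketched as follows; the function and thresholds are only an illustration of the quoted scheme, not an implementation from any engine:

```c
/* sketch of the quoted variable-resolution draft idea: keep draft in
 * quarter-plies, quantize more coarsely as the iteration number grows,
 * so the result always fits in 6 bits (thresholds are illustrative) */
static unsigned encode_draft(unsigned quarter_plies, unsigned iteration)
{
    if (iteration <= 15)
        return quarter_plies;         /* 1/4-ply resolution */
    if (iteration <= 31)
        return quarter_plies / 2;     /* 1/2-ply resolution */
    return quarter_plies / 4;         /* full-ply resolution */
}
```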
how would you *know* which resolution you stored? That is problem number
one, since I carry hash entries not only across iterations but across
full searches as well. And when you start quantizing things, you can
adversely affect hash matches. I.e., I'm currently searching to 20.5, but
the hash table has a draft of 20 - no use. A draft of 21 may be useful,
or may be quantized to a wrong value.

I think this adds too many opportunities for error to make it worthwhile,
as hash tables already are a major source of errors while programs are
being developed, and missing matches or getting false matches would only
make it worse...
>So for most of the time (except endings) you have a resolution of 1/4
>ply, which is enough to decide whether you may cut or not. I think you
>don't need the full resolution here. Higher resolutions are needed only
>to allow small increments to add up during long lines.
yes, but there *is* a difference between a 15.5-ply search below this
node and a 15.75-ply search below this node... otherwise that .25
increment ought not be in there at all.
>
>So I have 17 bits left for score and flags - enough for me...
>
>I have no information how Frans does this, but you see 64-bit entries
>are easily possible...
>
>- Guido
if you are willing to accept more possible errors and less exact results,
yes... although I don't know that he even does fractional-ply extensions.