Author: Miguel A. Ballicora
Date: 11:23:15 07/14/01
On July 14, 2001 at 02:16:17, Bruce Moreland wrote:
>On July 14, 2001 at 01:59:31, TEERAPONG TOVIRAT wrote:
>
>>Hi,
>>
>>I've been trying to reduce my hashtable size,so that I can gain
>>more entries. At first,I try to pack three structure tag into one
>>integer it looks like this...
>>
>> new integer = (flag<<a)|(depth<<b)|(value)
>>
>>It fails because value is signed int the rest are unsigned.
>>Could anyone solve the problem?
>>
>>Thanks in advance,
>>Teerapong
>
>Use a 16-bit integer for the value and two chars for the other stuff. That gets
>you into 32-bits without screwing anything up. Be forewarned that 16-bit
>integers are evil, so don't use the things repeatedly in inner loops. "Ooh,
>these things are smaller, so the processor must like them better, so I'll make
>all my ints into shorts whenever possible", is the utterance of someone who is
>going to spend a confused afternoon wondering why they got 25% slower.
>
>Also, if you are going to use these things, put the two chars together so your
>struct is char-char-word or word-char-char, but never char-word-char, which will
>either cause the structure to pad (I believe to 48 bits in this case) or will
>cause problems due to bad alignment if you get the structure to pack.
This is a clean and bug-free way to do it; however, since padding (or the
absence of it) is not guaranteed, I will add that the size of the hash table
entry should be checked with (sizeof(HASH_STRUCTURE) == WhateverWeExpect), and
the number of entries should be allocated according to the result of sizeof
rather than a #define value. Otherwise, it could break under some compiler.
To force the padding, I think it could be done with a structure whose only
member is an array[2] of a union that contains either a short int (assuming it
is 16 bits) or a char[2] (assuming 8 bits each).
Of course, this might sound a bit more complicated.
typedef union {
    char c[2];
    short int si;
} field_t;

struct hash {
    field_t f[2];
} hashentry;
Then you assign:

    hashentry.f[0].c[0] = flag;
    hashentry.f[0].c[1] = depth;
    hashentry.f[1].si = value;
I think this should work in any compiler, and sizeof(hashentry) should be
32 bits (4 bytes, actually) as long as char is 8 bits and short int is 16 bits,
which is a very common assumption nowadays.
Of course, I could be wrong, since I wrote this offhand as an exercise for myself.
>
>Finally, watch out, because 16 bits is +/- 32767, and you could get killed on
>integer overflows if you have a mate score that's near there (as Steven J.
>Edwards has decreed in his EPD standard), especially if you ignore my other
>advice and use more of these dinky elements of death.
I won't have that problem, since MATE is 32000 in my program (just paranoia,
to avoid being so close to 32767), but I am curious: why would you have
overflow problems if MATE is the maximum possible value? Is it because of the
way you adjust the mate values?
Regards,
Miguel
>
>bruce
Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.