Author: Uri Blass
Date: 01:41:33 09/26/03
On September 26, 2003 at 01:03:33, Tony Werten wrote:

>On September 25, 2003 at 13:02:22, Tord Romstad wrote:
>
>>On September 25, 2003 at 11:28:55, Robert Hyatt wrote:
>>
>>>On September 25, 2003 at 09:48:33, Tord Romstad wrote:
>>>
>>>>On September 24, 2003 at 16:28:57, Robert Hyatt wrote:
>>>>
>>>>>I try to use _most_ of main memory for serious games, and if you have a
>>>>>1 gig machine, I generally use something like hash=784M, hashp=40M,
>>>>>cache=128M, and go from there...
>>>>
>>>>Interesting. Is a 40M pawn hash table really useful for Crafty? How big
>>>>are your pawn hash entries? My pawn hash table contains just 256 entries,
>>>>where each entry is 128 bytes. The last time I tried, increasing the size
>>>>of the table gave just a very small speedup (less than 2%, if I recall
>>>>correctly).
>>>>
>>>>Tord
>>>
>>>I've never carefully tested this, but 256 entries seems _way_ small. Just
>>>look at how many different possible pawn positions there are.
>>
>>I decided to experiment with this again. I let my engine analyze the
>>position after 1. d4 d5 2. c4 e6 3. Nc3 Nf6 4. Bg5 Be7 to a depth of
>>10 plies with different pawn hash table sizes. Here are the results
>>(the first column is the number of entries, the second column is the
>>number of seconds needed to complete 10 plies):
>
>You should add 0 entries, I think, because going from 0 to 1 will give you
>the biggest speedup.
>
>My guess is that the search time for 0 will be even above 90s.
>
>One entry will already mean that you remember the pawn structure of the
>parent node, which is most useful, especially if you exclude silly captures
>(QxP, PxQ) in quiescence.

I doubt it. No entries means that I update the pawn structure incrementally in make move and unmake move; pawn hash tables mean that I need to calculate everything from scratch, because I do not calculate information in unmake move, so it may be slower.

I already went back to the code from before the hash tables, because in the previous code I calculated information that is probably better not to calculate. For example, I had an 8*64 array that gave me, for every queen direction and every square on the board, the square that blocks it from that direction, and I updated this array incrementally. I thought that this array might help me calculate things faster (for example, in my move generator I did not need to check every time whether the square I go to is empty), but it seems that it is bad to use big arrays often. In my old code I have an even bigger array that tells me the square of the attacker for every direction (16 directions, including knight directions) and every square, but I do not use that array so often, so it is less expensive.

I do not know if I will try pawn hash tables again, because my experience with them is negative (a lot of work, no significant change in speed, and loss of information). Without pawn hash I have information like the squares that are attacked by one pawn and the squares that are attacked by two pawns, which is not used in my program today. With hash I lost that information because I did not store it. Of course I can change that, but it could make me slower again. Only after storing passed pawns and isolated pawns in the hash table did I read in Gerd's post that it is a bad idea to use pawn hash tables for cheap things like that, and today I do not have anything more expensive in my pawn structure calculations.
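To make the technique concrete: a minimal pawn-hash sketch in C, assuming a direct-mapped table keyed by a pawn-only Zobrist key. Every name in it (PawnEntry, pawn_probe, evaluate_pawns) is hypothetical, and the 256-entry size simply follows Tord's figure above; it is not code from Crafty or from any engine in this thread.

/* Minimal pawn-hash sketch; all names are hypothetical. */
#include <stdint.h>

#define PAWN_HASH_SIZE 256

typedef struct {
    uint64_t key;        /* Zobrist key computed from pawns only        */
    int      score;      /* cached pawn-structure evaluation            */
    uint64_t passed;     /* example stored term: passed-pawn bitboard   */
    uint64_t isolated;   /* example stored term: isolated-pawn bitboard */
} PawnEntry;

static PawnEntry pawn_table[PAWN_HASH_SIZE];

/* Stand-in for a full pawn-structure evaluation done from scratch. */
static int evaluate_pawns(uint64_t *passed, uint64_t *isolated) {
    *passed = 0;
    *isolated = 0;
    return 0;
}

/* Probe the table; on a miss, recompute from scratch and cache. */
int pawn_probe(uint64_t pawn_key) {
    PawnEntry *e = &pawn_table[pawn_key % PAWN_HASH_SIZE];
    if (e->key != pawn_key) {                 /* miss: full recompute */
        e->key   = pawn_key;
        e->score = evaluate_pawns(&e->passed, &e->isolated);
    }
    return e->score;                          /* hit: cached result   */
}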
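And for the blocker array described above, a hedged sketch of the idea in C, again with hypothetical names. Only the NORTH direction is shown, rebuilt from scratch; a real incremental version would patch just the file, rank or diagonal a move touched.

#include <stdint.h>

enum { NORTH = 0 };                 /* one constant per queen direction */
#define NO_BLOCKER 64               /* sentinel: ray runs off the board */

static int blocker[8][64];          /* blocker[dir][sq] = first occupied
                                       square seen from sq along dir    */

/* Rebuild the NORTH slice from an occupancy bitboard. */
void rebuild_north(uint64_t occupied) {
    for (int file = 0; file < 8; file++) {
        int nearest = NO_BLOCKER;             /* nearest piece above sq */
        for (int rank = 7; rank >= 0; rank--) {
            int sq = rank * 8 + file;
            blocker[NORTH][sq] = nearest;     /* all squares between sq
                                                 and nearest are empty  */
            if (occupied & (1ULL << sq))
                nearest = sq;
        }
    }
}

A sliding-piece move generator can then run from a square directly to blocker[dir][sq] without testing each intermediate square for emptiness, which is the speedup hoped for from such an array.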
People who use pawn hash tables only in the qsearch may gain a lot, because remembering all the information only in the leaves is cheaper; but if I fetch the information from the hash table at every node where it changes, it is not clear that it is faster. It is the same as the make/unmake versus copy discussion, where copying is often slower than make move. I do not save the copy when I use pawn hash tables, even in the case that I have 100% hits.

Uri
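As an illustration of the make/unmake versus copy trade-off mentioned above, a minimal C sketch; Position, Move, make_move and unmake_move are hypothetical stand-ins for an engine's real internals, not anyone's actual code.

#include <stdint.h>

typedef int Move;

typedef struct {
    uint64_t pawns[2];     /* white and black pawn bitboards           */
    int      pawn_score;   /* incrementally maintained pawn-eval term  */
    /* ... the rest of the position ... */
} Position;

/* Stubs standing in for a real engine's move execution. */
static void make_move(Position *pos, Move m)   { (void)pos; (void)m; }
static void unmake_move(Position *pos, Move m) { (void)pos; (void)m; }

/* Copy approach: pay for a full state copy at every node, then
   restore by copying back. */
void search_copy(Position *pos, Move m) {
    Position saved = *pos;         /* the copy itself costs time       */
    make_move(pos, m);
    /* ... recurse ... */
    *pos = saved;                  /* unmake by restoring the copy     */
}

/* Make/unmake approach: reverse the move explicitly, updating
   incremental data such as pawn_score in both directions. */
void search_make_unmake(Position *pos, Move m) {
    make_move(pos, m);             /* updates pawn_score incrementally */
    /* ... recurse ... */
    unmake_move(pos, m);           /* reverses the incremental updates */
}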