Author: KarinsDad
Date: 00:06:09 08/28/99
On August 27, 1999 at 23:07:20, Robert Hyatt wrote:

>On August 27, 1999 at 17:51:53, KarinsDad wrote:
>
>>On August 27, 1999 at 16:09:21, Robert Hyatt wrote:
>>
>>[snip]
>
>if your math is right, doesn't that mean that 95% or more of your tree is not
>'linked'??? ie how does the parent/child pointers work if you can only store
>3% of the tree in the first place???

Good question. The following description contains a potential answer. It also rests on some assumptions that are explained later, so you may want to read it all the way through first.

The links are removed as the tree is pruned (and the average node size of 128 bytes includes the links). A 100MB hash table holds about 800,000 nodes at 128 bytes per node, so I have to get rid of a LOT of the tree. Since I am not doing normal Alpha-Beta, but going bottom up, I came up with a solution that seems like it might accomplish a good portion of this.

What I do is search up to a given ply based on how much average time I think I have. For example, with 3 minutes per move in a standard game, I estimate that I can go about 9 ply at 100,000 NPS with a branching factor of 5. So, I search 9 ply up one limb. At the 9th ply, I average 5 children that I search (note: the first lookup searches all 36 or so 9-ply children in order to get an initial score). I then recursively go up and down the tree continuously to the 9th ply.

Once my tree gets 100% full (I may have to drop this to 95% full or something based on how well the table fills), I go back and remove all 9-ply nodes that are not the best score for a given 8-ply node, OR that fall below a certain score threshold. Ignoring the second criterion for a moment, this means that I remove a TON of 9th-ply nodes but keep the tree intact for the first 8 ply. With a branching factor of 5, that means 36 ply-1 nodes (on average) + 36 * 5 ply-2 nodes + 36 * 5^2 ply-3 nodes + ... + 36 * 5^7 ply-8 nodes + 36 * 5^7 ply-9 nodes (only 1 ply-9 node for each ply-8 node).
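As a quick sanity check of those figures (this is just my re-derivation of the arithmetic in the post, assuming the stated 36 root moves, branching factor 5 below the root, 128-byte nodes, and one retained ply-9 child per ply-8 node):

```python
# Re-deriving the tree-size and table-capacity figures from the post.
# Assumptions (all taken from the post): 36 root moves, branching factor
# of 5 below the root, 128 bytes per node (links included), 100MB table.
ROOT_MOVES = 36
BRANCHING = 5
NODE_SIZE = 128
TABLE_BYTES = 100 * 1024 * 1024

# Hash table capacity at 128 bytes per node (the "800,000 or so").
capacity = TABLE_BYTES // NODE_SIZE

# Nodes per ply: 36 at ply 1, 36*5 at ply 2, ..., 36*5^7 at ply 8,
# plus 36*5^7 ply-9 nodes (one surviving ply-9 child per ply-8 node).
per_ply = [ROOT_MOVES * BRANCHING ** p for p in range(8)]
per_ply.append(ROOT_MOVES * BRANCHING ** 7)  # ply 9

total = sum(per_ply)
print(f"table capacity: {capacity:,} nodes")  # 819,200
print(f"pruned tree:    {total:,} nodes")     # 6,328,116 (~6 million)
```

So the "6 million nodes" figure checks out against an ~800K-node table, which is the roughly 8:1 gap the pruning has to close.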
This works out to about 6 million nodes in the table (of course, the table can only hold 800K, but bear with me for a moment).

Now, since the program ponders, a hash tree of ~40K nodes or so already exists (pondering creates close to an 800K-node tree, but a good portion of that will be removed on average when the opponent makes his 1 of 36 moves; the rest of the moves and their children will be removed. This also assumes that the opponent made one of the 5 or 6 best moves that my program considered; otherwise, it may be as low as 5K nodes or so left over). So, there should be a fairly decent sort on the ply-1 nodes to begin with. The assumption of good move order is what allows me to assume a branching factor of 5 (or lower; I haven't yet tested this algorithm due to a bug in our code that we are trying to get rid of).

Anyway, after the first ply-1 branch is searched all the way out to 9 ply, about 490K nodes (5^8 + 5^7 + ...) have been searched. While the second ply-1 branch is being searched, the 100%-full hash table criterion forces 80% of the ply-9 nodes from the first ply-1 branch to be removed (only the best ply-9 child node is needed for each ply-8 parent, so 80% * 5^8), or about 312K nodes removed. This drops us back down to about 490K nodes (coincidentally). So, about 4 first-ply branches can be searched before the table is relatively full (490K - 310K = 180K nodes per branch * 4 branches = 720K). That is, about 720K out of the 800K is taken up by the first 4 moves (after pruning 9-ply nodes).

Now, a few of these moves should normally be OK, but at least one should have a lower score. How much lower the score is determines how aggressively I start pruning relatively useless 8-ply nodes out of the lowest-scoring branch (again, keeping the best 8-ply node for each 7-ply node). I continue to do this over and over again, every time the hash table fills up.
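The fill-and-prune cycle above can be sketched numerically (again just re-deriving the post's figures under its stated assumptions; the post rounds the retained count to 180K per branch, which gives the ~720K total):

```python
# Model of one fill-and-prune cycle from the post, assuming branching
# factor 5 below each root move, 9-ply branches, and pruning that keeps
# only the best ply-9 child per ply-8 node (removing 4 of every 5).
BRANCHING = 5

# Searching one root move out to ply 9: 5^0 + 5^1 + ... + 5^8 nodes.
branch_full = sum(BRANCHING ** p for p in range(9))

# Pruning removes 80% of the 5^8 ply-9 nodes in that branch.
pruned = (BRANCHING ** 8) * 4 // 5

kept_per_branch = branch_full - pruned
print(f"full branch:     {branch_full:,}")        # 488,281 (~490K)
print(f"pruned away:     {pruned:,}")             # 312,500 (~312K)
print(f"kept per branch: {kept_per_branch:,}")    # 175,781 (~180K)
print(f"four branches:   {4 * kept_per_branch:,}")  # ~703K of ~800K
```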
Effectively, I can then have a tree with a branching factor of 36 at ply 1, a complete branching factor of 5 from ply 2 to ply 7, and a branching factor of 1 at ply 8 and ply 9. This works out to about 540K nodes. I can prune the LOUSY branches even more and re-search among the good branches (and I may even prune less heavily the first time around in the good branches, but that is questionable since I may not yet know which the good branches are).

Now granted, these calculations are cutting it close (for example, a full ply-9 search would yield 18 million nodes, or 180 seconds out of the 3 minutes). But the calculations also ignore several things. One, they assume that no transpositions are found in the table, which we know is totally false. Hence, the table will fill up more slowly (and we could get through 8 or 10 ply-1 searches before having to prune the table at the 8-ply level), and conversely the pruning will actually be less effective. Two, I think that the move order will continue to get better the farther I move over in the tree, so the average branching factor may drop. And, if I drop the child links from the nodes, I will save about 15%; in other words, I could increase the 800K nodes in a 100 MB hash table to 940K nodes or so. But if I do this, it will decrease my NPS (due to the time required to search for the child nodes).

One detriment to this algorithm is that, due to searching down one limb at a time, the frequency of hitting the hash table may decrease as compared to Alpha-Beta; I don't know. Only trying it will tell. And this is all relatively theoretical and slightly complex at this point (lots of variables about which I am making assumptions as to how they will work out; as programmers, we know that things do not always work the way we predict).

Well, it's late and I'm not sure how well I did the math or even if this makes sense. But let me know what you think.

KarinsDad :)
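P.S. A quick check of the "full ply 9 search" figure mentioned above, under the same assumptions from the post (36 root moves, branching factor 5, 100,000 NPS):

```python
# Cost of a complete (unpruned) 9-ply search, per the post's figures:
# 36 root moves, branching factor 5, searched at 100,000 nodes/second.
ROOT_MOVES = 36
BRANCHING = 5
NPS = 100_000

# 36 * (5^0 + 5^1 + ... + 5^8) nodes in a complete 9-ply tree.
full_tree = ROOT_MOVES * sum(BRANCHING ** p for p in range(9))
seconds = full_tree / NPS
print(f"full 9-ply tree:  {full_tree:,} nodes")  # 17,578,116 (~18 million)
print(f"time at 100K NPS: {seconds:.0f} s")      # 176 s of the 180 s budget
```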
Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.