Author: martin fierz
Date: 11:11:48 03/29/04
On March 29, 2004 at 14:01:20, Dan Andersson wrote:

>>in any case, you can now assume that you find the best move 1st in the fraction
>>F (~0.9), and in the remaining cases at random; this leads to
>>
>>nodes(D) ~ (N*(F + (N/2)*(1-F)))^(D/2)
>
>This is a flat distribution for the rest of the moves. A reasonable argument
>can be made for some other kind of distribution.
>F.ex.: the next move has the success rate F. Thus the mean number of moves
>tried will be close to one for any reasonably large F.
>
>MvH Dan Andersson

yes, obviously other models are very much possible too. i never said mine was
correct, just an estimate. it's better IMO to make a wrong model than not to do
anything at all :-)

i also suggested that other models may be better: e.g. the other model i
proposed was to use sqrt(N) instead of N/2, which shifts the weight towards
earlier cutoffs. i think assuming that F stays constant is too optimistic: you
often have a very good guess for the best move, because you either have a hash
move or a killer move, or you have the "most winning capture". but once you have
no more possible winning captures, i don't think you have such a good success
probability any more. i should measure this distribution though!

cheers
  martin

>>plugging in some numbers like N=20 (reduced from 40, trying to account for
>>nullmove?!), D=10, F=0.90 and 0.91, i get
>>
>>nodes(10, F=0.90) = 79 million
>>nodes(10, F=0.91) = 62 million
>>
>>so for 1% better move ordering i get 22% less nodes?! seems rather excessive.
>>let me try again using sqrt(N) instead of N/2 - that still gets a 12% reduction.
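the model quoted above can be checked in a few lines of python (a sketch only; the function and argument names are mine, not from the post — `miss_cost` is the average number of moves tried when the first move fails to cut off, N/2 for the flat model or sqrt(N) for the alternative):

```python
import math

def nodes(N, F, D, miss_cost):
    # best move is searched first with probability F; otherwise an
    # average of miss_cost moves are tried before the cutoff occurs
    eff = N * (F + miss_cost * (1 - F))  # effective branching factor
    return eff ** (D / 2)

N, D = 20, 10
for F in (0.90, 0.91):
    flat = nodes(N, F, D, N / 2)             # flat distribution: ~79M / ~62M
    early = nodes(N, F, D, math.sqrt(N))     # weight shifted to early cutoffs
    print(F, round(flat / 1e6), "M flat,", round(early / 1e6), "M sqrt")
```

running this reproduces the quoted numbers: the flat model gives about 79 and 62 million nodes (a ~22% reduction for the 1% better ordering), while the sqrt(N) variant still shows a ~12% reduction.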