Computer Chess Club Archives


Subject: Re: move ordering and node count

Author: Uri Blass

Date: 15:37:47 03/29/04


On March 29, 2004 at 18:12:29, Dann Corbit wrote:

>On March 29, 2004 at 18:06:22, martin fierz wrote:
>
>>On March 29, 2004 at 16:35:29, Dann Corbit wrote:
>>
>>>On March 29, 2004 at 16:22:21, martin fierz wrote:
>>>
>>>>On March 29, 2004 at 15:50:10, Dann Corbit wrote:
>>>>
>>>>>On March 29, 2004 at 10:17:18, martin fierz wrote:
>>>>>
>>>>>>aloha!
>>>>>>
>>>>>>i was discussing this somewhere in a thread, but thought i'd like to make this
>>>>>>question more visible in the hope of getting a good answer:
>>>>>>
>>>>>>everybody knows that with plain alpha-beta, a fixed number of moves N per node,
>>>>>>and perfect move ordering a search to depth D needs
>>>>>>
>>>>>>nodes(depth) = sqrt(N)^D = N^(D/2) nodes.
>>>>>>
>>>>>>with absolutely imperfect move ordering it needs
>>>>>>
>>>>>>nodes(depth) = N^(D) nodes.
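
(For reference: the exact minimum with perfect ordering, from Knuth and Moore's
analysis, is N^ceil(D/2) + N^floor(D/2) - 1 leaf nodes, i.e. roughly 2*N^(D/2);
for N = 35 and D = 8 that is about 3.0e6 leaves, against 35^8, about 2.3e12,
for full minimax.)
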
>>>>>>
>>>>>>a typical chess program gets something like 90% move ordering in the sense that
>>>>>>if a cutoff move exists, it will search it as first move in 90% of all cases.
>>>>>>here's my question:
>>>>>>
>>>>>>can anybody give an estimate for nodes(depth) as function of this move ordering
>>>>>>parameter? obviously, this would also depend on when you find the best move in
>>>>>>those cases where you don't find it first. any kind of model is acceptable, e.g.
>>>>>>you always find it on 2nd, always on sqrt(N)th, always last, at a random number,
>>>>>>whatever. i'm just interested in the general behavior of nodes(depth) as a
>>>>>>function of the cutoff-%age.
>>>>>>
>>>>>>i'd be extremely surprised if nobody ever estimated this, so: has any of you
>>>>>>ever seen or calculated such numbers, and if yes, what do they look like?
>>>>>>
>>>>>>and is there any theory how this would apply to a modern chess program with
>>>>>>nullmove and extensions instead of the plain A/B framework above?
>>>>>>
>>>>>>basically this question of course means: do you really gain anything tangible
>>>>>>when improving your MO from say 90% to 92%?
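
One way to get a rough feel for the question (a toy model of my own, not
something from the original post): assume a uniform tree with N moves at every
node, the usual PV/CUT/ALL node types, and that ordering quality only matters
at CUT nodes: with probability p the cutoff move is searched first, otherwise
it sits at a uniformly random position among moves 2..N. Expected node counts
then follow a short recursion; here is a Python sketch (the function name and
the model itself are assumptions, not anything from the thread):

  # Toy expected-node-count model: uniform tree, N moves per node,
  # PV/CUT/ALL node types; at a CUT node the cutoff move is searched first
  # with probability p, otherwise it lies at a uniformly random position
  # among moves 2..N.
  def expected_nodes(N, D, p):
      # expected number of moves searched at a CUT node (cutoff move included)
      k = p * 1.0 + (1.0 - p) * (2 + N) / 2.0

      def pv(d):    # PV node: first child is a PV node, the rest are CUT nodes
          if d == 0:
              return 1.0
          return 1.0 + pv(d - 1) + (N - 1) * cut(d - 1)

      def cut(d):   # CUT node: k children searched on average, each an ALL node
          if d == 0:
              return 1.0
          return 1.0 + k * all_(d - 1)

      def all_(d):  # ALL node: every child is searched, each a CUT node
          if d == 0:
              return 1.0
          return 1.0 + N * cut(d - 1)

      return pv(D)

  if __name__ == "__main__":
      for p in (1.00, 0.92, 0.90, 0.50, 0.00):
          print(p, expected_nodes(35, 8, p))

In this model the gain from better ordering enters as a factor per CUT node,
applied roughly every other ply, so the ratio between, say, p = 0.90 and
p = 0.92 grows with depth rather than being a fixed multiplier. Note also that
even p = 0 does not degenerate to full minimax here, because a randomly placed
cutoff move is still found about halfway through the move list on average.
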
>>>>>
>>>>>I have not done the math, but I am guessing that no matter what kind of move ordering
>>>>>you have (purely random or the PV move every time) you will get something like
>>>>>this:
>>>>>
>>>>>nodes = some_constant * sqrt(mini_max_nodes)
>>>>>
>>>>>If you have random move ordering, then the constant will be very large.
>>>>>If you have perfect move ordering, then the constant will be very small.
>>>>>
>>>>>You will never get the worst case unless you try very hard to achieve it.
>>>>>It might be possible to degenerate to mini-max (or very close to it) but you
>>>>>will have to choose the worst possible move at every single turn except the
>>>>>leaves.  I doubt if anyone can do it.
>>>>>;-)
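
(For the perfect-ordering end this guess is in fact close to exact: the minimal
alpha-beta tree visits on the order of 2*N^(D/2) = 2*sqrt(N^D) positions, so
the constant is roughly 2 there. The disagreement in the rest of the thread is
about whether the factor stays constant once the ordering is imperfect.)
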
>>>>
>>>>i disagree with your formula. it is definitely not some_constant. it is a
>>>>number between 1...sqrt(N) taken to the power of D. else there would be no
>>>>point in improving move ordering, or at least, not as much as there is :-)
>>>
>>>What if the constant is one trillion?
>>
>>so it's one trillion, then what?
>>let's try:
>>
>>i want to compute your nodes for A/B to depth 2. you say:
>>
>>>>>nodes = some_constant * sqrt(mini_max_nodes)
>>and the constant is one trillion =>
>>
>>nodes = 10^12*sqrt(35*35) >> minimax_nodes.
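
(Concretely: sqrt(35*35) = 35, so with a 10^12 constant the formula predicts
3.5e13 nodes for a depth-2 search whose entire minimax tree is only
1 + 35 + 35*35 = 1261 nodes.)
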
>>
>>do you see now why your formula is plain wrong? i mean, of course you can define
>>your constant for every depth, but then it's not a constant :-)
>>
>>and the whole point is that this constant *does* scale with depth in a form in
>>which you can give an analytic expression for it, so why not do so???
>>
>>cheers
>>  martin
>>
>>
>>>If you choose the move at random, you will still find the right one by random
>>>chance on the first try 1/n of the time (where n is the number of moves) and on
>>>the second try 1/(n-1) of the time (given the first try missed), etc.  On
>>>average, we won't have to try more than about half of the moves to find the
>>>best one.  This will cause a huge reduction in the number of nodes.
>>>
>>>As you can see, you would have to put forth a stupendous effort to cause minimax
>>>behavior.
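
(Worked out: if the cutoff move sits at a uniformly random position among the n
moves, its expected position is (n+1)/2, which is 18 for n = 35, so on average
only about half of the moves at a node are searched before the cutoff.)
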
>
>It was a number from a hat.
>
>Look here:
>http://www.brillianet.com/programming/artificial_intelligence/tutorials/tut1.pdf
>at figures 3-6
>
>You will see at a glance that every change from random node selection (Hash,
>history, etc.) resulted in a linear speedup.
>
>So the study agrees with my wild guess.

I do not see it.
It is also illogical, because at depth 1 better ordering gives no improvement
and the improvement only starts at bigger depths, so the speedup cannot be a
fixed constant.
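
If the toy model sketched further up is anywhere near right, this is easy to
check: the depth-1 count comes out as 1 + 1 + (N-1) = N + 1 regardless of p,
and the ratio between p = 0.90 and p = 1.00 only opens up as the depth grows.
A quick check using the (assumed) expected_nodes sketch from above:

  # how the speedup from better ordering scales with depth in the toy model
  for D in range(1, 11):
      ratio = expected_nodes(35, D, 0.90) / expected_nodes(35, D, 1.00)
      print(D, round(ratio, 2))
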

Uri



