Computer Chess Club Archives



Subject: Re: A slightly different perspective on Alpha Beta

Author: Robert Hyatt

Date: 15:02:16 05/15/99



On May 15, 1999 at 15:22:59, KarinsDad wrote:

>On May 14, 1999 at 17:35:00, Dan Newman wrote:
>
>>On May 14, 1999 at 15:09:35, KarinsDad wrote:
>>
>>>On May 14, 1999 at 14:25:05, J. Wesley Cleveland wrote:
>>>
>>>>On May 14, 1999 at 13:14:36, KarinsDad wrote:
>>>>
>>>>>
>>>>
>>>>You seem to misunderstand alpha-beta. Alpha-beta *always* finds the same "best"
>>>>move as exhaustive search does at any depth. What it does not tell you is how
>>>>much better the "best" move is.
>>>
>>>Alpha Beta finds the best move "under the same depth and constraints", but does
>>>not find the best move. That is why chess programs are not perfect at this point
>>>in development (since they are restricted from performing exhaustive searches).
>>>
>>
>>Well, yes, chess programs aren't perfect because they can't perform exhaustive
>>searches depthwise.  But I think there must be a misunderstanding of some sort,
>>perhaps in terminology?  The alpha-beta algorithm effectively does the same
>>search that pure minimax does.  Pure minimax is exhaustive (widthwise): every
>>node above a certain depth is examined.  Alpha-beta gets the same result but
>>only examines approximately the square root of the number of nodes in the
>>tree that's searched.  This is an enormous win when the depths are large.  In
>>chess it reduces the branching factor from approx 36 to approx 5 or 6.  When
>>people speak of "full-width, exhaustive, brute-force" search, it's alpha-beta
>>that they are generally speaking of since the result is (more or less) identical
>>to that achieved by pure minimax.  The pruning done by alpha-beta is of an
>>entirely different nature than the lossy pruning done in the quiescence search
>>or by the null move heuristic.
>
>Agreed. I totally understand what you are saying and you are correct, based on
>the limitations that you stated (i.e. using Alpha Beta to a given depth).
>
>I think that I will have a very difficult time explaining my point of view here
>(you touched on it when you agreed that programs cannot search exhaustively
>depthwise).
>
>Let's say that you have a position that you are searching. At each ply below the
>position, the tree grows by only about a factor of 5 (in a normal position)
>using Alpha Beta. So, if you search down 10 ply, you will search about 10
>million nodes. At 250k nps, this comes out to be about 40 seconds.
>Very doable with today's hardware and software. And, of course, chess programs
>use other techniques in order to search any PV further and some of the other
>less desirable positions even less.
>
>However, you used iterative deepening to get to that 10 ply and you pruned out
>the tree at each level due to alpha beta cutoffs found while searching at lower
>ply. So, you have a "perfect" score on your PV and a range of scores everywhere
>else. This range of scores was calculated using very limited data. Originally,
>they are calculated using the evaluation routine at the leaf node of the depth
>below the one being searched, but eventually, those ranges start to improve as
>the depth increases. Fair enough.
>
>However, since MinMax is a widthwise search (as is Alpha Beta), it drops
>depthwise nodes on the floor as well. And these nodes will very rarely be
>re-searched unless they move over to (or towards) the PV.


this is wrong.  Alpha/beta and minimax are 'depth-first' search algorithms,
not 'breadth-first'.  I.e., they search _down_ the tree first, and only then
search across the tree, with every branch searched going to the maximum depth
of the tree.

Alpha/Beta is provably equal to minimax (see Knuth and Moore, "An Analysis of
Alpha-Beta Pruning", circa 1975).  Alpha/Beta _always_ returns the same score
and move and PV as minimax, _always_, with no exceptions.  Variations come from other
places like the hash table, but _not_ alpha/beta...
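
To make the equivalence concrete, here is a minimal sketch (my own
illustration, not Crafty or any real engine) of the two searches side by
side.  Both walk the tree depth-first; alpha/beta is just minimax plus a
window that lets it abandon moves that are already refuted, and it returns
the identical root score while visiting far fewer nodes:

/* Toy example: the "positions" are integers and the "evaluation" is a
   made-up hash of the path, so no board code is needed.  BRANCH, DEPTH
   and leaf_value() are invented for this sketch. */
#include <stdio.h>

#define BRANCH 3          /* hypothetical branching factor   */
#define DEPTH  4          /* hypothetical fixed search depth */
#define INF    10000

static long mm_nodes, ab_nodes;

static int leaf_value(int path) { return (path * 37 + 11) % 200 - 100; }

/* pure minimax, written in negamax form */
static int minimax(int path, int depth)
{
    mm_nodes++;
    if (depth == 0) return leaf_value(path);
    int best = -INF;
    for (int i = 0; i < BRANCH; i++) {
        int score = -minimax(path * BRANCH + i, depth - 1);
        if (score > best) best = score;
    }
    return best;
}

/* the same depth-first search with alpha/beta bounds added */
static int alphabeta(int path, int depth, int alpha, int beta)
{
    ab_nodes++;
    if (depth == 0) return leaf_value(path);
    for (int i = 0; i < BRANCH; i++) {
        int score = -alphabeta(path * BRANCH + i, depth - 1, -beta, -alpha);
        if (score >= beta) return beta;    /* cutoff: opponent avoids this node */
        if (score > alpha) alpha = score;  /* best score found so far           */
    }
    return alpha;
}

int main(void)
{
    int mm = minimax(1, DEPTH);
    int ab = alphabeta(1, DEPTH, -INF, INF);
    printf("minimax    : score %d, %ld nodes\n", mm, mm_nodes);
    printf("alpha/beta : score %d, %ld nodes\n", ab, ab_nodes);
    return 0;  /* mm == ab, every time */
}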




>
>So, effectively, what MinMax and Alpha Beta both do is a kind of controlled
>shotgun effect where you are aiming in the general area of your target and you
>will probably partially hit it, but there are no guarantees.

alpha/beta _absolutely_ guarantees to find the same move/score as minimax
when searching to the same depth.  The only difference is that alpha/beta
searches roughly 2*sqrt(N) nodes where minimax searches N nodes...  But the
score is guaranteed
to be the same as is the best move.
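
As a rough worked example of those node counts (illustrative numbers only):
with branching factor b and depth d, minimax looks at about b^d leaf
positions, while perfectly ordered alpha/beta looks at about
b^ceil(d/2) + b^floor(d/2) - 1 of them (the minimal-tree result analyzed in
the Knuth/Moore paper mentioned above), which is roughly 2*sqrt(b^d):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double b = 36.0, d = 10.0;   /* chess-like numbers, chosen for illustration */
    double minimax_leaves = pow(b, d);
    double ab_minimal     = pow(b, ceil(d / 2.0))
                          + pow(b, floor(d / 2.0)) - 1.0;

    printf("minimax leaves       : %.3g\n", minimax_leaves);             /* ~3.7e15 */
    printf("alpha/beta best case : %.3g\n", ab_minimal);                 /* ~1.2e8  */
    printf("2*sqrt(minimax)      : %.3g\n", 2.0 * sqrt(minimax_leaves));
    printf("effective branching  : %.1f\n", pow(ab_minimal, 1.0 / d));   /* ~6 */
    return 0;
}

That last number is where the "approx 36 down to approx 5 or 6" branching
factor quoted earlier in the thread comes from.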



>
>Say, for example, that you had a program with "nearly perfect" move ordering.
>Then, Alpha Beta would work wonderfully. It would quickly dispatch lesser moves
>and you would, in the majority of cases, find either the best move in a position
>or the second best move in a position (I'm using these terms loosely here).

again, that is a wrong statement.  It will _always_ find the best move, _never_
the second-best move.  In fact, with alpha/beta it is _impossible_ to find the
second-best move because it gets pruned away so quickly...


>
>But how do you acquire a nearly perfect move ordering? It's tough, since it is
>based on the position (yes, a lot of programs use various techniques based on
>similar positions and positions at a similar depth, etc.).
>
>So, you are stuck with trying to come up with a nearly perfect move-ordering
>algorithm, improving your search techniques, and improving your evaluation
>routine. Simple Alpha Beta no longer cuts it alone (you will be
>killed by every other program out there).
>
>Most programs rely on pruning the tree even further with null move, razoring, or
>other techniques. The theory is that if you can search deeper, you will search
>better. In some positions (leading to mate or whatever), this may be true. But
>for a majority of positions, I believe this postulate seems to hold BECAUSE
>people are strictly following Alpha Beta (or some Alpha Beta derivative).
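
Since null move keeps coming up in this thread, here is a bare-bones sketch
of the idea (my own code, with a hypothetical Position/Move interface and
helpers like make_null_move(), generate_moves() and quiesce(); it is not any
particular program's source): give the opponent a free move, and if a
reduced-depth search still fails high, assume a real move would fail high
too and cut off.  That is the "lossy" kind of pruning; plain alpha/beta
never does anything like it.

typedef struct Position Position;    /* opaque, hypothetical board type */
typedef int Move;                    /* hypothetical move encoding      */

/* Hypothetical helpers, names invented for this sketch only. */
int  quiesce(Position *pos, int alpha, int beta);
int  generate_moves(Position *pos, Move *moves);
void make_move(Position *pos, Move m);
void unmake_move(Position *pos, Move m);
void make_null_move(Position *pos);
void unmake_null_move(Position *pos);
int  in_check(Position *pos);

int search(Position *pos, int depth, int alpha, int beta)
{
    if (depth == 0)
        return quiesce(pos, alpha, beta);

    /* Null-move pruning: let the side to move "pass", then search R=2
       plies shallower with a minimal window around beta.  Skipped when
       in check, where passing would be illegal. */
    if (depth >= 3 && !in_check(pos)) {
        make_null_move(pos);
        int score = -search(pos, depth - 1 - 2, -beta, -beta + 1);
        unmake_null_move(pos);
        if (score >= beta)
            return beta;            /* even doing nothing is too good: prune */
    }

    /* Otherwise, a normal alpha/beta move loop. */
    Move moves[256];
    int n = generate_moves(pos, moves);
    for (int i = 0; i < n; i++) {
        make_move(pos, moves[i]);
        int score = -search(pos, depth - 1, -beta, -alpha);
        unmake_move(pos, moves[i]);
        if (score >= beta) return beta;
        if (score > alpha) alpha = score;
    }
    return alpha;
}
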
>
>My program is relying on some of these techniques as well (including a modified
>version of Alpha Beta). However, my program is also attempting to pick up some
>of the more interesting depth nodes dropped on the floor due to Alpha Beta. The
>idea is to not miss some of those hidden resources that normally get pruned (and
>to only find them if positions start heading towards the PV). That's all.
>
>Without going into detail on how I am doing this, I hope this made some sense.
>
>>
>>>If after a quiescence alpha beta search, you determine that you have lost a
>>>knight for a pawn, most programs do not search much beyond that to find the
>>>forced checkmate 4 moves later and hence do not find the sacrifice. The GM, on
>>>the other hand, may find this.
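
For readers who have not seen one, the quiescence search being discussed
looks roughly like this sketch (same made-up Position/Move interface as the
null-move sketch above, with hypothetical evaluate() and generate_captures()
helpers): at the horizon you stand pat on the static evaluation and follow
only captures, which is exactly why a sacrifice whose payoff needs quiet
moves can come back scored as nothing more than "knight for a pawn".

int  evaluate(Position *pos);                       /* hypothetical static eval */
int  generate_captures(Position *pos, Move *moves); /* captures/promotions only */

/* Search only captures until the position is "quiet". */
int quiesce(Position *pos, int alpha, int beta)
{
    int stand_pat = evaluate(pos);      /* score for simply stopping here */
    if (stand_pat >= beta) return beta;
    if (stand_pat > alpha) alpha = stand_pat;

    Move moves[64];
    int n = generate_captures(pos, moves);
    for (int i = 0; i < n; i++) {
        make_move(pos, moves[i]);
        int score = -quiesce(pos, -beta, -alpha);
        unmake_move(pos, moves[i]);
        if (score >= beta) return beta;
        if (score > alpha) alpha = score;
    }
    return alpha;   /* quiet moves, and any mate they lead to, are never seen */
}
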
>>>
>>>What this effectively means is that the alpha beta routines "prune" out lines
>>>which do not look promising, but in actuality are very promising.
>>>
>>
>>That's true, but pure minimax won't see that these lines are promising either.
>>And it won't see as deeply (in the same period of time) because it's uselessly
>>examining huge numbers of nodes that can't affect the result.
>
>Correct. I am not disputing that Alpha Beta is superior to Min Max. It obviously
>is.
>


it is only superior in terms of time, not score/move...




>>
>>>For example, let's take the opening. 1. d4 d5 2. c4 c6 3. Nc3
>>>
>>>Most alpha beta programs would prune out the move c4 at ply 3 since there
>>>appear (via the evaluation routine) to be a lot of other moves at
>>>that point which develop pieces and are stronger. This does not mean that a
>>>program may not find this on a re-search or something; however, you have to get
>>>to ply 11 or so to start to appreciate the advantages of c4 and you will never
>>>get there with a standard alpha beta search.
>>
>>You'll never get there with simple minimax, but you will get there with alpha-
>>beta at 120 Mnps in 8 seconds or so.
>
>Yes, I was referring to today's max speeds of 500k nps (single processor) where
>it would take somewhere around 30 minutes. Fine for analysis, but lousy for a
>game.
>

your math is also a bit off.  Crafty normally searches to depth 12/13 in the
middlegame, at a speed of 800K nodes per second...




>>
>>> This is the MAIN reason that
>>>programs without opening books are so weak. The GMs have spent years analyzing
>>>to some depth various openings and have found moves such as 2. c4 which are
>>>strong, but a program would not find these moves in any reasonable amount of
>>>time since it prunes via alpha beta.
>>>
>>>So, in the example of exhaustive 6 ply search, there are many lines dropped via
>>>alpha beta at ply 1, 2, 3, 4, and 5 so that the evaluation routine does not even
>>>get to compare those nodes at ply 6. If a line is dropped at ply 4, how can you
>>>evaluate its grandchildren at ply 6 or ply 9 or ply 12 and find that it is in
>>>fact actually stronger, not weaker?
>>>
>>
>>In alpha-beta you search from that node down to ply 6 and then evaluate; you
>>don't simply evaluate at that node and drop it if its score is too low.  In
>>fact the nodes you drop are those that are too high...so high that your
>>opponent will choose some other path to avoid that node and thence the disaster
>>that likely lies ahead.  So, you may as well not even finish searching it
>>because you've proven your opponent won't allow you to get there.  This is done
>>recursively throughout the tree, and in this way lots of useless work is avoided.
>>
>>-Dan.
>
>Absolutely.
>
>KarinsDad :)
>
>>
>>>Do you still think that I do not understand Alpha Beta? Min Max and Alpha Beta
>>>are the reasons that other techniques such as null move and razoring, etc. have
>>>been developed. If we were doing exhaustive search to ply 14 and alpha beta
>>>search to ply 20 after that, no human in the world could beat the program and
>>>these other techniques would not be needed.
>>>
>>>KarinsDad :)


