Computer Chess Club Archives



Subject: Re: New crap statement ?

Author: Robert Hyatt

Date: 18:32:42 09/26/01



On September 26, 2001 at 19:03:12, Miguel A. Ballicora wrote:

>On September 26, 2001 at 17:56:29, Robert Hyatt wrote:
>
>>On September 26, 2001 at 16:53:56, Miguel A. Ballicora wrote:
>>
>>>On September 26, 2001 at 16:32:26, Gian-Carlo Pascutto wrote:
>>>
>>>>On September 26, 2001 at 16:20:25, Dann Corbit wrote:
>>>>
>>>>>But maybe Vincent has come up with a real stunner.
>>>>
>>>>We are all eagerly awaiting the proof that 1 + 1 > 2
>>>
>>>On the other hand, I do not see a proof that it can't be done.
>>>
>>>In theory... why not? with iterative deepening you have 1+2+3+4+5+6+7+8 < 8
>>>
>>>Regards,
>>>Miguel
>>>
>>
>>That is not the same thing.  That is taking the very serial property of
>>alpha/beta and using it to advantage by iterating.  Parallel searching doesn't
>>assist a serial property, it works against it.
>
>Of course, once you know how. If I had told people a long time ago that I would do
>iterative deepening and that it would be faster, people would have said that it is not
>possible (if they did not know of the tricks that improve move ordering, such as
>hash tables etc.). Easy to explain once you see it.


Note that iterative deepening works even without hash tables.  Just passing
the PV moves to the next iteration, plus the killer moves, will do a reasonable
job of ordering the moves.
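
A minimal sketch of that idea in C (gen_moves, make, unmake, eval, order_moves,
INF and the rest are hypothetical stand-ins for real engine code; a real engine
would also collect a proper triangular PV rather than the one-move-per-ply
shortcut used here):

  int pv[MAX_PLY];           /* best move at each ply from the last iteration */
  int killers[MAX_PLY][2];   /* two killer moves per ply                      */

  int search(int alpha, int beta, int depth, int ply) {
      if (depth == 0) return eval();
      int moves[MAX_MOVES];
      int n = gen_moves(moves);
      order_moves(moves, n, ply);        /* PV move first, killers next, then the rest */
      for (int i = 0; i < n; i++) {
          make(moves[i]);
          int score = -search(-beta, -alpha, depth - 1, ply + 1);
          unmake(moves[i]);
          if (score >= beta) {           /* refutation: remember it as a killer */
              killers[ply][1] = killers[ply][0];
              killers[ply][0] = moves[i];
              return beta;
          }
          if (score > alpha) {
              alpha = score;
              pv[ply] = moves[i];        /* seeds the ordering for the next iteration */
          }
      }
      return alpha;
  }

  void iterate(int max_depth) {
      for (int d = 1; d <= max_depth; d++)   /* each pass inherits pv[] and killers[] */
          search(-INF, INF, d, 0);
  }

No hash table anywhere; the ordering information carried from one iteration to
the next is just pv[] and killers[].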


>
>One of the big problems of alpha-beta is that the machine is quite
>stupid about deciding how to traverse the tree and how to prune (pruning is
>actually not part of pure alpha-beta). In fact, that is the reason why
>most of the tree is garbage. Most of the time it would be very useful to know
>whether it is worth continuing the current path, or quickly searching other
>possibilities to see if the search can be terminated right now. Under alpha-beta,
>this kind of decision is not possible without awkward tricks.



Actually, null-move is one of those awkward tricks.  That is exactly how it
works...
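
For reference, a bare-bones sketch of where the trick sits in the search
(in_check, make_null, unmake_null and the reduction R are hypothetical
stand-ins; R is commonly 2 or 3):

  /* inside search(), before the normal move loop: let the opponent move
     twice in a row.  If a reduced-depth search still fails high against
     beta, the real moves are assumed to be good enough as well, and the
     node is cut off immediately. */
  if (depth > R && !in_check()) {
      make_null();
      score = -search(-beta, -beta + 1, depth - 1 - R, ply + 1);
      unmake_null();
      if (score >= beta) return beta;    /* "terminate the search right now" */
  }

That is exactly the "can I stop searching this node now?" question, answered
with an ordinary (reduced) alpha/beta search rather than any new machinery.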


>Nobody can guarantee that there is no parallel algorithm that, combined with
>a certain move ordering and pruning method, could make alpha-beta better.
>Who knows? Breakthroughs are difficult to foresee!
>The fact that nobody imagined it in decades does not prove anything.


That is fine to say.  But the alpha/beta _algorithm_ is by definition a serial
algorithm, i.e. "first find the lower bound for the first move, then use that
to efficiently dismiss the remainder of the moves."  That is purely serial in
nature.  Any method of speeding it up that doesn't simply improve the
move ordering is just a random-chance improvement.  The paper by Knuth/Moore
should be required reading for everyone.  It does a reasonable job of explaining
why this is true.
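
The serial dependence is visible right in the code (a minimal sketch, using the
same hypothetical helpers as above):

  int alphabeta(int alpha, int beta, int depth) {
      if (depth == 0) return eval();
      int moves[MAX_MOVES];
      int n = gen_moves(moves);
      for (int i = 0; i < n; i++) {
          make(moves[i]);
          int score = -alphabeta(-beta, -alpha, depth - 1);
          unmake(moves[i]);
          if (score >= beta) return beta;    /* bound from earlier moves cuts this one off  */
          if (score > alpha) alpha = score;  /* the bound used to dismiss moves i+1, i+2, ... */
      }
      return alpha;
  }

The bound that dismisses move i cheaply only exists after moves 1..i-1 have
been searched.  A processor that starts on move i before that bound exists is
searching with a weaker window than the serial program would have used.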

I don't see any way that a _parallel search_ can be used to improve move
ordering.  The search is independent of the move-ordering algorithm:  one
traverses the tree, the other tries to produce moves in an order that makes
traversal of the tree optimal in terms of search space.  There is no way to know
the perfect move ordering until after the search has been completed, and by then
it is too late to use that knowledge.




>
>Hard to believe? Yes. One has to be skeptical, but I do not think that we can say
>"impossible".

I certainly feel comfortable in saying that "2+2=4", and that any other result,
using base-10 arithmetic, is _impossible_.  Or that the limit of 1/X as X
approaches infinity is zero, and any other result is impossible.  I feel just as
strongly about alpha/beta.  I'm not saying there won't be something better than
alpha/beta in the future, although I seriously doubt it.  But as far as alpha/beta
itself goes, if someone produces a speedup > 2, then I can absolutely state that I
can write a serial algorithm that will be faster than the serial algorithm used
for the test.  If you can consistently produce a speedup of 3.0, then I can
time-slice that same algorithm on one cpu and get a speedup of 1.5.  And I can do
it for _any_ alpha/beta algorithm you run in parallel.  Every time.  And using
your code, it will be easy for me to do.
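
Spelling the arithmetic out (assuming, for concreteness, that the claimed 3.0
speedup came from a two-processor run):

  T_serial / T_parallel     =  3.0                (claimed 2-cpu speedup)
  T_timesliced              ~  2 * T_parallel     (interleave both search
                                                    threads on one cpu)
  T_serial / T_timesliced   ~  3.0 / 2  =  1.5

So any speedup above 2 on two processors converts directly into a serial
program that beats the original serial program, which just means the original
serial program was not the best one available.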





>
>You said that if there is a >2x speedup, it is because the sequential algorithm
>is bad. Well, maybe alpha-beta is already the problem. ;-)

No doubt about it.  Alpha/beta is a hard problem for parallel searching,
because it was formulated as a serial algorithm from the beginning.




>
>Regards,
>Miguel
>
>
>>
>>
>>
>>>>
>>>>--
>>>>GCP


