Computer Chess Club Archives



Subject: Re: New crap statement ?

Author: Miguel A. Ballicora

Date: 07:44:45 09/27/01



On September 26, 2001 at 21:32:42, Robert Hyatt wrote:

>On September 26, 2001 at 19:03:12, Miguel A. Ballicora wrote:
>
>>On September 26, 2001 at 17:56:29, Robert Hyatt wrote:
>>
>>>On September 26, 2001 at 16:53:56, Miguel A. Ballicora wrote:
>>>
>>>>On September 26, 2001 at 16:32:26, Gian-Carlo Pascutto wrote:
>>>>
>>>>>On September 26, 2001 at 16:20:25, Dann Corbit wrote:
>>>>>
>>>>>>But maybe Vincent has come up with a real stunner.
>>>>>
>>>>>We are all eagerly awaiting the proof that 1 + 1 > 2
>>>>
>>>>On the other hand, I do not see a proof that it can't be.
>>>>
>>>>In theory... why not? With iterative deepening you have 1+2+3+4+5+6+7+8 < 8
>>>>
>>>>Regards,
>>>>Miguel
>>>>
>>>
>>>That is not the same thing.  That is taking the very serial property of
>>>alpha/beta and using it to advantage by iterating.  Parallel searching doesn't
>>>assist a serial property, it works against it.
>>
>>Of course, once you know how. If I had told people a loooong time ago that I would do
>>iterative deepening and that it would be faster, people would have said it was not
>>possible (if they did not know about the tricks that improve move ordering, such as
>>hash tables, etc.). It is easy to explain once you see it.
>
>
>Note that iterative deepening works even without hash tables. Just passing
>the PV moves to the next iteration, + the killer moves, will do a reasonable
>job of ordering moves.
>
>
>>
>>One of the big problems of alpha-beta is that the machine is quite
>>stupid at deciding how to traverse the tree and how to prune (pruning is not
>>part of pure alpha-beta, actually). In fact, that is the reason why
>>most of the tree is garbage. Most of the time it would be very useful to know
>>whether it is worth continuing the current path or quickly searching other possibilities
>>to see if I can terminate the search right now. Under alpha-beta, these kinds of
>>decisions are not possible without awkward tricks.
>
>Actually, null-move is one of those awkward tricks.  That is exactly how it
>works...

Yes, and not only is it awkward, it also cuts just a piece of all the fat
that is in the tree.
We feel it is great, but IMHO there is a lot to improve in this aspect
(selectivity).
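
Just to make this concrete, here is roughly what the trick looks like inside a plain
negamax search (a minimal sketch in C; the Position type, the helper functions and the
reduction R are stubs I invented for illustration, not code from any real engine):

#include <stdio.h>

typedef struct { int side_to_move; int material; } Position;

static int  evaluate(const Position *p)    { return p->material; }
static int  in_check(const Position *p)    { (void)p; return 0; }
static void make_null_move(Position *p)    { p->side_to_move ^= 1; }
static void unmake_null_move(Position *p)  { p->side_to_move ^= 1; }

#define R 2   /* depth reduction for the null-move search */

static int search(Position *pos, int depth, int alpha, int beta)
{
    if (depth <= 0)
        return evaluate(pos);            /* quiescence would go here */

    /* Null move: hand the opponent a free move.  If we still stand at
     * or above beta after doing nothing, the real moves will almost
     * certainly fail high too, so we cut off without searching them. */
    if (!in_check(pos) && depth > R) {
        make_null_move(pos);
        int score = -search(pos, depth - 1 - R, -beta, -beta + 1);
        unmake_null_move(pos);
        if (score >= beta)
            return beta;                 /* null-move cutoff */
    }

    /* ... the normal move loop with alpha-beta would follow ... */
    return alpha;
}

int main(void)
{
    Position p = {0, 0};
    printf("score %d\n", search(&p, 4, -10000, 10000));
    return 0;
}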

>>Nobody can guarantee that there is no parallel algorithm that, combined with
>>a certain move ordering and pruning method, can make alpha-beta better.
>>Who knows? Breakthroughs are difficult to foresee!
>>The fact that nobody imagined it in decades does not prove anything.
>
>
>That is fine to say.  But the alpha/beta _algorithm_ is by definition a serial
>algorithm.  IE "first find the lower bound for the first move, then use that
>to efficiently dismiss the remainder of the moves."  That is purely serial in
>nature.  Any methodology of speeding that up that doesn't simply improve the
>move ordering is just a random-chance improvement.  The paper by Knuth/Moore
>should be required reading of everyone.  It does a reasonable job of explaining
>why this is true.
>
>I don't see any way that a _parallel search_ can be used to improve move
>ordering.  The search is independent of the move ordering algorithm. One
>traverses the tree, one tries to produce moves in an order that makes traversal
>of the tree optimal in terms of search space.  There is no way to know the
>perfect move ordering until after the search has been completed.  Then it is too
>late to use that knowledge.

Unless you apply it to the next iteration; that is how iterative deepening works.
The ordering is not perfect, but it gets better and better.
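
Roughly, the driver looks like this (a toy sketch in C; the scoring function and the
numbers are invented, it only shows the best move of one iteration being put in front
of the move list for the next, deeper iteration):

#include <stdio.h>

enum { NMOVES = 4, MAX_DEPTH = 6 };

/* Stand-in for a real alpha-beta search of one root move to `depth`;
 * it just returns an arbitrary number so the program runs.           */
static int score_move(int move, int depth) { return (move * 7 + depth) % 11; }

int main(void)
{
    int order[NMOVES] = {0, 1, 2, 3};       /* current root move order */

    for (int depth = 1; depth <= MAX_DEPTH; depth++) {
        int best = 0, best_score = -1;

        for (int i = 0; i < NMOVES; i++) {  /* search moves in order   */
            int s = score_move(order[i], depth);
            if (s > best_score) { best_score = s; best = i; }
        }

        /* Move this iteration's best move to the front, so the next,
         * deeper iteration searches it first and gets cheap, early
         * cutoffs on the remaining moves.                             */
        int bm = order[best];
        for (int i = best; i > 0; i--)
            order[i] = order[i - 1];
        order[0] = bm;

        printf("depth %d: best move %d, score %d\n", depth, bm, best_score);
    }
    return 0;
}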

>>Hard to believe? Yes. One has to be skeptical, but I do not think that we can say
>>"impossible".
>
>I certainly feel comfortable in saying that "2+2=4", and that any other result, using
>base-10 arithmetic, is _impossible_.  Or that the limit of 1/X as X approaches
>infinity is zero, and any other result is impossible.  I feel just as strongly
>about alpha/beta.  I'm not saying there isn't something better than alpha/beta
>in the future, although I seriously doubt it.  But as far as alpha/beta itself
>goes, if someone produces a speedup > 2, then I can absolutely state that I can
>write a serial algorithm that will be faster than the serial algorithm used for
>the test.  If you can consistently produce a speedup of 3.0, then I can time-
>slice that same algorithm on one cpu and get a speedup of 1.5.  And I can do it
>for _any_ alpha/beta algorithm you do in parallel.  Every time. And using your
>code, it will be easy for me to do it.

That is what I already said in a previous post, but it is a different comparison:
1 vs. 2 threads, or 1 vs. 2 CPUs. Anyway, it will be 1+1 = 2 plus a little overhead,
from slicing the time, that I do not have to pay.
I have no idea, but I imagine that this little overhead might be negligible but
not zero. That will make it 1+1 > 2 :-) :-)
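
For example (numbers invented only to illustrate the point): if slicing the two threads
on one cpu adds, say, 2% of context-switch and cache overhead, the sliced run takes about
2.04 T of wall clock while the two-cpu run takes 1.00 T, so the measured ratio is 2.04,
that is, 1+1 > 2 by exactly the cost of the slicing.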

Regards,
Miguel



