Computer Chess Club Archives


Subject: Re: New crap statement ?

Author: Robert Hyatt

Date: 14:55:23 09/26/01


On September 26, 2001 at 16:20:25, Dann Corbit wrote:

>On September 26, 2001 at 15:18:37, Robert Hyatt wrote:
>[snip]
>>The question, however, is "how does it do on 2 processors compared to the
>>best sort algorithm on 1?"  or "how does it do on 1 processor compared to the
>>best sort algorithm on 1?"
>>
>>And none of this really has anything to do with chess.  Alpha/Beta is a well-
>>defined algorithm.  There is no "flexibility" in how it is done, because it is
>>a simple mathematical expression of a search space (the minimum search space)
>>that must be traversed to produce a result.  Nobody claims that someone might
>>not one day develop a better algorithm to do searches in parallel.  Something
>>not based on alpha/beta at all.  But right now we are all using the _same_
>>basic algorithm, whether serially or in parallel.  And _that_ makes super-linear
>>impossible to accept on a regular basis.  It just won't happen.
>
>Maybe Vincent has made some improvements.  There are some obvious things that
>come to mind.
>
>Consider a shared-memory alpha and beta...
>The search could be modified mid-stream for all of the processors when there is
>a fail-low or fail-high, instead of each one discovering the fact only after it
>has completed its own search.

That was in Cray Blitz from day one of the DTS algorithm.  Nothing new there
at all.
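
For the curious, here is a rough sketch of the kind of shared-bound machinery
being described.  It is illustrative only -- not actual Cray Blitz/DTS code --
and the names (SplitPoint, split_report, split_aborted) are invented for the
example.  The point is simply that when any thread working at a split point
fails high, a shared bound and abort flag let its siblings stop immediately
instead of finishing their subtrees first:

    /* Hypothetical C11 sketch, not DTS itself: sibling threads at one split
       point share a lower bound and an abort flag. */
    #include <stdatomic.h>

    typedef struct {
        atomic_int alpha;   /* shared lower bound, raised as moves fail high */
        atomic_int abort;   /* set once the whole split point fails high     */
        int        beta;    /* the upper bound is fixed for this node        */
    } SplitPoint;

    /* Workers poll this before expanding more nodes in their subtrees. */
    static int split_aborted(const SplitPoint *sp)
    {
        return atomic_load(&sp->abort);
    }

    /* A worker reports the score of the move it just finished. */
    static void split_report(SplitPoint *sp, int score)
    {
        int a = atomic_load(&sp->alpha);
        while (score > a &&
               !atomic_compare_exchange_weak(&sp->alpha, &a, score))
            ;                             /* raise the shared alpha          */
        if (score >= sp->beta)
            atomic_store(&sp->abort, 1);  /* fail high: everyone stops early */
    }

A serial search gets the same effect for free, since each sibling move is only
searched after the previous one has already updated alpha.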


>
>I also wonder why a program with a shared hash table does not do better than a
>single CPU, since both CPUs are adding data to the hash table and therefore
>both search threads can find answers.  Surely there are transpositions from
>different points of the search.  Of course, a twice-as-fast CPU would
>accomplish the same thing, but we don't have many 2GHz CPUs yet.

It is all probability-based and involves race conditions as well.  One thread
has to finish first and deposit a hash result that the other can use.  This
happens automatically in a serial search, and is why we solve Fine #70 before
we reach the 26-ply depth needed to _really_ see winning a pawn.

However, for every case where that helps, there is a case where it hurts.  A
second thread can also leave you a score that makes you do _more_ work than the
serial version would, precisely because you now know something you would not
have known when doing it serially.
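
To make the shared-table discussion concrete, here is a bare-bones illustration
of two threads sharing one transposition table without locks, using the
well-known xor validation trick.  It is only a sketch -- TTEntry, tt_store,
tt_probe and the table size are invented for the example, not any real engine's
code.  A write torn by the race between threads simply fails validation and is
treated as a miss, while a good entry stored by either thread (a Fine #70 style
hash graft, say) is usable by both:

    /* Illustrative lockless shared hash table, not production code. */
    #include <stdint.h>

    typedef struct {
        uint64_t key_xor_data;   /* zobrist key ^ packed data              */
        uint64_t data;           /* packed score, depth, bound type, move  */
    } TTEntry;

    #define TT_SIZE (1u << 20)   /* number of entries, a power of two      */
    static TTEntry table[TT_SIZE];

    static void tt_store(uint64_t key, uint64_t data)
    {
        TTEntry *e = &table[key & (TT_SIZE - 1)];
        e->key_xor_data = key ^ data;    /* a torn entry fails the probe   */
        e->data         = data;
    }

    static int tt_probe(uint64_t key, uint64_t *data)
    {
        const TTEntry *e = &table[key & (TT_SIZE - 1)];
        if ((e->key_xor_data ^ e->data) == key) {   /* both words agree    */
            *data = e->data;
            return 1;                               /* hit                 */
        }
        return 0;                                   /* miss or torn write  */
    }

Whether such a shared entry saves work or creates it is exactly the
probabilistic question above: it depends on which thread gets there first and
on what the extra bound does to the other thread's tree.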

>
>Or consider...
>While CPU #1 is searching the root, CPU #2 is on the ponder move (whatever
>that turns out to be).  This would benefit a lot more if we had a lot of CPUs
>that could split the job of searching the root, and another set of one or more
>CPUs that could search the ponder move.  Often, by advancing to the solution
>move, results can be found much faster than by searching the root -- even if
>the root is searched ten times as long.

That doesn't sound reasonable.  We want _all_ the hardware searching the tree
together, not some doing one thing and some doing something else...
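
A back-of-the-envelope way to see the objection (idealized, ignoring ponder
hits and split overhead): if k of N processors are dedicated to the ponder
move, the search you actually have to answer gets at most N - k of them, so

    speedup on the current search  <=  N - k  <  N
    e.g.  N = 4, k = 2  gives at most 2x, versus up to roughly 4x with all
    four processors splitting the same tree.

The pondering processors only pay off on the moves where the prediction turns
out to be right.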


>
>In other words, we are assuming that Vincent has not made an innovation.  Now,
>all three of the ideas I put forth might be stupid.  But maybe Vincent has come
>up with a real stunner.

There is nothing that will take the best-known serial algorithm and turn it
into a consistent super-linear speedup.  And alpha/beta is serial by
definition...
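
One way to make the "won't happen on a regular basis" claim concrete is the
standard simulation argument, which has nothing chess-specific in it:

    Suppose a parallel alpha/beta search on N processors were consistently
    more than N times faster than the best serial search:

        T_parallel(N)  <  T_serial / N

    Then time-slice the N threads on one processor.  Ignoring switching
    overhead, the simulation takes about

        T_simulated(1)  ~  N * T_parallel(N)  <  T_serial

    which would be a serial algorithm beating the "best" serial algorithm --
    a contradiction.  Individual positions can still come out super-linear
    through move-ordering and hash-table luck; it just cannot hold on average.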



