Computer Chess Club Archives



Subject: Re: Parallel search article RBF

Author: Robert Hyatt

Date: 10:36:35 09/11/02


On September 11, 2002 at 13:03:12, Vincent Diepeveen wrote:

>On September 11, 2002 at 12:57:20, Vincent Diepeveen wrote:
>
>>On September 11, 2002 at 11:41:13, Robert Hyatt wrote:
>>
>>>On September 11, 2002 at 07:33:56, Gian-Carlo Pascutto wrote:
>>>
>>>>On September 11, 2002 at 00:36:21, Dann Corbit wrote:
>>>>
>>>>>Since the speedup was almost linear, I would say it is better than any [other]
>>>>>known method.
>>>>
>>>>It's 7-15 times slower than alphabeta.
>>>>
>>>>If you start out by being 7 times slower, it's not hard to get good speedups.
>>>>
>>>>--
>>>>GCP
>>>
>>>
>>>That is the point.  And it also takes a huge amount of memory, since
>>>best-first search has to store the whole tree as it is traversed.
>>
>>They didn't do this at all. In fact, they search in a very pathetic
>>way, using a self-defined form of best-first search.
>>
>>There is no guarantee they find anything. Obviously such approaches work
>>for tricks with a small branching factor. I find their parallel speedup
>>very bad, considering the way they search is ideally parallelizable.
>>
>>Linear, I'd say.
>>
>>It is not clear which Crafty version they compared against, or on what
>>machine. On average I get times over 10 times faster with Crafty when
>>solving something.
>>
>>From memory, their box had 500 MHz CPUs; Crafty was probably run on a
>>box of around 200 MHz, I would estimate.
>>
>>Best regards,
>>Vincent
>
>I need to add an important note. In chess they compared search times,
>which in itself looks like a good idea. In Othello, however, they compared
>not search times but numbers of nodes. Their reported nodes per second
>don't seem very fast to me (a few hundred evaluations per second), and,
>more important, they compared against 'Jamboree' search, which they
>implemented themselves using Cilk.
>
>I've never been a big fan of Cilk here. Claiming it is ideal for
>'simulating' things on 1 processor is a weird statement to me. It is the
>typical academic publication where, based on 2 numbers they got by
>tossing a coin, they conclude things.


I don't think what they did was _that_ bad.

Best-first search is a known search algorithm, and it has a known
weakness that they cover early on.  Their randomized approach is one
way to attempt to minimize that weakness.
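As a rough illustration of the idea (this is my own toy sketch, not the article's actual algorithm -- the node layout, the epsilon parameter, and the negamax back-up are all my assumptions), a randomized best-first minimax might look like:

```python
import random

class Node:
    def __init__(self, state, value=0.0):
        self.state = state
        self.value = value      # score from the side-to-move's view (negamax)
        self.children = []      # empty until the node is expanded

def rbf_search(root, expand, evaluate, iterations, epsilon=0.25):
    """Toy randomized best-first minimax.

    expand(state) -> list of child states; evaluate(state) -> static score.
    Each iteration walks root-to-leaf (usually via the best child, sometimes
    a random one), expands the leaf, and backs the new values up the path.
    """
    for _ in range(iterations):
        # 1. Select: descend toward a leaf, randomizing the choice of child.
        path = [root]
        node = root
        while node.children:
            if random.random() < epsilon:
                node = random.choice(node.children)
            else:
                # best reply = the child whose value is worst for the opponent
                node = min(node.children, key=lambda c: c.value)
            path.append(node)
        # 2. Expand: create the leaf's children with static evaluations.
        for child_state in expand(node.state):
            node.children.append(Node(child_state, evaluate(child_state)))
        # 3. Back up: recompute negamax values along the visited path.
        for n in reversed(path):
            if n.children:
                n.value = max(-c.value for c in n.children)
    return root.value
```

Note that the whole tree stays resident in `Node` objects, which is exactly the memory cost mentioned above; the random deviation in step 1 is one way to keep the search from committing permanently to an early-best line.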

Their testing approach with Crafty is somewhat unusual and _could_ be
thought of as a "deepblue-type approach": they expanded nodes, gave the
positions to Crafty, and let it do a shallow search to compute the
"value".  Sort of like what the Deep Blue chess processors did for the
real DB software engine.
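In code terms, that division of labor (as I read it -- the function names are hypothetical, and a plain fixed-depth negamax stands in here for Crafty's shallow search) amounts to something like:

```python
def shallow_negamax(position, depth, evaluate, moves, make):
    """Plain fixed-depth negamax, standing in for the engine's shallow search."""
    legal = moves(position)
    if depth == 0 or not legal:
        return evaluate(position)
    return max(-shallow_negamax(make(position, m), depth - 1,
                                evaluate, moves, make)
               for m in legal)

def leaf_value(position, evaluate, moves, make, depth=2):
    # A best-first expander would call this for every newly created node:
    # the node's "value" is not a static evaluation but the score of a
    # shallow search from that position -- the expander plays the role of
    # the DB software engine, the shallow searcher that of the chess chips.
    return shallow_negamax(position, depth, evaluate, moves, make)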

The idea has some merit, although I am not personally a fan of BF search
given the success alpha/beta depth-first search is producing today.  But
nothing says that approach _won't_ work.  And if the parallel speedup is
good enough, a large number of processors can make an inefficient approach
run fast as hell in spite of the inefficiency.
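Back of the envelope (my own numbers, borrowing GCP's 7x figure from above as an assumption): if the program is S times slower than alpha/beta on one processor but parallelizes with efficiency E, it breaks even once P * E >= S:

```python
import math

def break_even_processors(slowdown, efficiency):
    """Smallest processor count P such that P * efficiency >= slowdown."""
    return math.ceil(slowdown / efficiency)
```

So a program that starts 7x slower but scales at 90% efficiency needs 8 processors just to tie sequential alpha/beta; at 15x slower and 50% efficiency it needs 30.  Everything beyond that is genuine gain.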



Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.